News Posts matching #API


AMD Software Adrenalin 22.7.1 Released, Includes OpenGL Performance Boost and AI Noise-Suppression

AMD on Tuesday released the AMD Software Adrenalin 22.7.1 drivers, which include several major updates to the feature-set. To begin with, AMD has significantly updated its OpenGL ICD (installable client driver), which delivers up to an incredible 79 percent increase in frame-rates at 4K with "Fabulous" settings, as measured on the flagship RX 6950 XT, and up to 75 percent on the entry-level RX 6400. Also debuting is AMD Noise Suppression, a new feature that cleans up your voice-calls and in-game voice-chats. The software leverages AI to filter out background noise that is not part of the prominent foreground speech. Radeon Super Resolution support has been extended to RX 5000 series and RX 6000 series GPUs running on Ryzen processor notebooks with Hybrid graphics setups.

Besides these, Adrenalin 22.7.1 adds optimization for "Swordsman Remake," plus support for Radeon Boost with VRS in "Elden Ring," "Resident Evil VIII," and "Valorant." The drivers improve support for the Windows 11 22H2 Update, and for Agility SDK 1.602 and 1.607. A few more Vulkan API extensions are added with this release. Among the handful of issues fixed are lower-than-expected F@H performance on the RX 6000 series; Auto Undervolt disabling idle fan-stop; "Hitman 3" freezing when switching between windows in exclusive fullscreen mode; blurry web video upscaling on certain RX 6000 series cards; and Enhanced Sync locking framerates to 15 FPS during video playback on extended monitors.

DOWNLOAD: AMD Software Adrenalin 22.7.1

Intel Releases Open Source AI Reference Kits

Intel has released the first set of open source AI reference kits specifically designed to make AI more accessible to organizations in on-prem, cloud and edge environments. First introduced at Intel Vision, the reference kits include AI model code, end-to-end machine learning pipeline instructions, libraries and Intel oneAPI components for cross-architecture performance. These kits enable data scientists and developers to learn how to deploy AI faster and more easily across healthcare, manufacturing, retail and other industries with higher accuracy, better performance and lower total cost of implementation.

"Innovation thrives in an open, democratized environment. The Intel accelerated open AI software ecosystem including optimized popular frameworks and Intel's AI tools are built on the foundation of an open, standards-based, unified oneAPI programming model. These reference kits, built with components of Intel's end-to-end AI software portfolio, will enable millions of developers and data scientists to introduce AI quickly and easily into their applications or boost their existing intelligent solutions."

AMD WMMA Instruction is Direct Response to NVIDIA Tensor Cores

AMD's RDNA3 graphics IP is just around the corner, and we are hearing more information about the upcoming architecture. Historically, as GPUs advance, it is not unusual for companies to add dedicated hardware blocks to accelerate a specific task. Today, AMD engineers have updated the backend of the LLVM compiler to include a new instruction called Wave Matrix Multiply-Accumulate (WMMA). This instruction will be present on GFX11, which is the RDNA3 GPU architecture. With WMMA, AMD will offer support for processing 16x16x16 size tensors in FP16 and BF16 precision formats. With these instructions, AMD is adding new arrangements to support the processing of matrix multiply-accumulate operations. This closely mirrors the work NVIDIA is doing with Tensor Cores.
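In essence, a single 16x16x16 WMMA operation computes D = A x B + C on 16x16 tiles, with FP16/BF16 inputs and a wider accumulator. A minimal pure-Python sketch of that arithmetic (an illustration of the math, not AMD's implementation) could look like this:

```python
def wmma_16x16x16(a, b, c):
    """Emulate one Wave Matrix Multiply-Accumulate: D = A x B + C,
    where A, B, C, D are 16x16 matrices (lists of lists).
    On real hardware A and B would be FP16 or BF16 and the
    accumulator a wider format; here everything is Python floats."""
    n = 16
    return [[sum(a[i][k] * b[k][j] for k in range(n)) + c[i][j]
             for j in range(n)]
            for i in range(n)]

# Identity times identity plus zeros should give identity back.
I = [[1.0 if i == j else 0.0 for j in range(16)] for i in range(16)]
Z = [[0.0] * 16 for _ in range(16)]
D = wmma_16x16x16(I, I, Z)
```

On the GPU, one such tile operation would be executed cooperatively by the lanes of a wavefront rather than by a scalar loop.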

The AMD ROCm 5.2 API update lists the use case for this type of instruction, which you can see below:
rocWMMA provides a C++ API to facilitate breaking down matrix multiply accumulate problems into fragments and using them in block-wise operations that are distributed in parallel across GPU wavefronts. The API is a header library of GPU device code, meaning matrix core acceleration may be compiled directly into your kernel device code. This can benefit from compiler optimization in the generation of kernel assembly and does not incur additional overhead costs of linking to external runtime libraries or having to launch separate kernels.

rocWMMA is released as a header library and includes test and sample projects to validate and illustrate example usages of the C++ API. GEMM matrix multiplication is used as primary validation given the heavy precedent for the library. However, the usage portfolio is growing significantly and demonstrates different ways rocWMMA may be consumed.
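The "fragments" the rocWMMA description mentions are simply fixed-size tiles of the larger GEMM problem, each processed block-wise. A toy pure-Python sketch of that decomposition (illustrative only; rocWMMA itself is a C++ header library) could be:

```python
def gemm_tiled(a, b, m, n, k, tile=16):
    """Compute C = A x B (m x k times k x n) by breaking the problem
    into tile x tile fragments, mirroring how rocWMMA distributes
    block-wise MMA work across GPU wavefronts."""
    c = [[0.0] * n for _ in range(m)]
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            # Each (i0, j0) output fragment would map to one wavefront.
            for k0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, m)):
                    for j in range(j0, min(j0 + tile, n)):
                        acc = 0.0
                        for kk in range(k0, min(k0 + tile, k)):
                            acc += a[i][kk] * b[kk][j]
                        c[i][j] += acc
    return c

# Sanity check: (2*I) x I should give 2*I back.
n = 32
a = [[2.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
b = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
c = gemm_tiled(a, b, n, n, n)
```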

Intel Arc A370M Graphics Card Tested in Various Graphics Rendering Scenarios

Intel's Arc Alchemist graphics cards launched first in the laptop/mobile space, and everyone is wondering just how well the first generation of discrete graphics performs in actual, GPU-accelerated workloads. Tellusim Technologies, a software company located in San Diego, has managed to get hold of a laptop featuring an Intel Arc A370M mobile graphics card and benchmark it against competing solutions. Instead of the Vulkan API, the team decided to use the D3D12 API for the tests, as Vulkan usually produces lower results on the new 12th-generation Intel graphics. With driver version 30.0.101.1736, the GPU was tested mainly in standard rendering workloads involving triangles and batches. Meshlet size is set to 69/169, and the job is as big as 262K Meshlets. The total amount of geometry is 20 million vertices and 40 million triangles per frame.

Using tests such as Single DIP (drawing 81 instances with u32 indices without going to the Meshlet level), Mesh Indexing (Mesh Shader emulation), MDI/ICB (Multi-Draw Indirect or Indirect Command Buffer), Mesh Shader (Mesh Shaders rendering mode), and Compute Shader (Compute Shader rasterization), the Arc GPU produced some interesting numbers, measured in millions or billions of triangles. Below, you can see the results of these tests.

Intel Arc Alchemist GPUs Get Vulkan 1.3 Compatibility

Part of the process of building a graphics card is ensuring compatibility with the latest graphics APIs like DirectX, OpenGL, and Vulkan. Today, we have confirmation that Intel's Arc Alchemist discrete graphics cards will be compatible with Vulkan's latest iteration, version 1.3. In January, the Khronos Group, the consortium behind the Vulkan API, released its regular two-year update to the standard. Graphics card vendors like NVIDIA and AMD announced support immediately with their drivers. Today, the Khronos website officially lists Intel Arc Alchemist mobile graphics as compatible with Vulkan 1.3, covering the Intel Arc A770M, A730M, A550M, A370M, and A350M GPUs.

At the time of writing, there is no official announcement for the desktop cards. However, given that the mobile SKUs support the latest standard, it is very likely that the desktop variants will carry the same level of support.
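Under the hood, a Vulkan driver reports its supported version as a single packed 32-bit integer (the encoding behind the VK_MAKE_API_VERSION macro in the Vulkan headers). A small Python sketch of how an application would check for 1.3 support from that value:

```python
def vk_make_api_version(variant, major, minor, patch):
    # Mirrors Vulkan's VK_MAKE_API_VERSION packing:
    # variant in bits 31:29, major 28:22, minor 21:12, patch 11:0.
    return (variant << 29) | (major << 22) | (minor << 12) | patch

def supports_vulkan_1_3(api_version):
    """Unpack the reported apiVersion and compare against 1.3."""
    major = (api_version >> 22) & 0x7F
    minor = (api_version >> 12) & 0x3FF
    return (major, minor) >= (1, 3)

# Hypothetical apiVersion a 1.3-conformant driver might report.
reported = vk_make_api_version(0, 1, 3, 205)
older = vk_make_api_version(0, 1, 2, 198)
```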

Intel Releases OpenVINO 2022.1 to Advance AI Inferencing for Developers

Since OpenVINO launched in 2018, Intel has enabled hundreds of thousands of developers to dramatically accelerate AI inferencing performance, starting at the edge and extending to the enterprise and the client. Today, ahead of MWC Barcelona 2022, the company launched a new version of the Intel Distribution of OpenVINO Toolkit. New features are built upon three-and-a-half years of developer feedback and include a greater selection of deep learning models, more device portability choices and higher inferencing performance with fewer code changes.

"The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimizations. The latest upgrade adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software plus Intel silicon enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network," said Adam Burns, vice president, OpenVINO Developer Tools in the Network and Edge Group.

Intel Updates Technology Roadmap with Data Center Processors and Game Streaming Service

At Intel's 2022 Investor Meeting, Chief Executive Officer Pat Gelsinger and Intel's business leaders outlined key elements of the company's strategy and path for long-term growth. Intel's long-term plans will capitalize on transformative growth during an era of unprecedented demand for semiconductors. Among the presentations, Intel announced product roadmaps across its major business units and key execution milestones, including: Accelerated Computing Systems and Graphics; Intel Foundry Services; Software and Advanced Technology; Network and Edge; and Technology Development. For more from Intel's Investor Meeting 2022, including the presentations and news, please visit the Intel Newsroom and Intel.com's Investor Meeting site.

Intel Adds Experimental Mesh Shader Support in DG2 GPU Vulkan Linux Drivers

Mesh shaders are a relatively new concept for a programmable geometry pipeline, one that promises to simplify the organization of the whole graphics rendering pipeline. NVIDIA introduced the concept with Turing back in 2018, and AMD joined with RDNA2. Today, thanks to findings by Phoronix, we have learned that Intel's DG2 GPU will carry support for mesh shaders under the Vulkan API. For starters, the difference between the mesh/task pipeline and the traditional graphics rendering pipeline is that the mesh version is much simpler and offers higher scalability, bandwidth reduction, and greater flexibility in the design of mesh topology and graphics work. In Vulkan, the current mesh shader support is NVIDIA's contribution, the VK_NV_mesh_shader extension. The docs below explain it in greater detail:
Vulkan API documentation: This extension provides a new mechanism allowing applications to generate collections of geometric primitives via programmable mesh shading. It is an alternative to the existing programmable primitive shading pipeline, which relied on generating input primitives by a fixed function assembler as well as fixed function vertex fetch.

There are new programmable shader types—the task and mesh shader—to generate these collections to be processed by fixed-function primitive assembly and rasterization logic. When task and mesh shaders are dispatched, they replace the core pre-rasterization stages, including vertex array attribute fetching, vertex shader processing, tessellation, and geometry shader processing.
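The collections of primitives a mesh shader emits are usually called meshlets: small clusters of triangles with a bounded number of unique vertices and primitives. A simplified Python sketch of how a tool might split a triangle index buffer into meshlets (the 64/126 limits here are common illustrative defaults, not Intel's numbers):

```python
def build_meshlets(indices, max_verts=64, max_prims=126):
    """Group a flat triangle index buffer into meshlets, each holding
    at most max_verts unique vertices and max_prims triangles.
    Triangles are re-indexed into meshlet-local vertex indices."""
    meshlets, verts, prims = [], {}, []
    for t in range(0, len(indices), 3):
        tri = indices[t:t + 3]
        new = [v for v in tri if v not in verts]
        if len(prims) >= max_prims or len(verts) + len(new) > max_verts:
            # Current meshlet is full: emit it and start a fresh one.
            meshlets.append({"vertices": list(verts), "triangles": prims})
            verts, prims = {}, []
        for v in tri:
            verts.setdefault(v, len(verts))
        prims.append([verts[v] for v in tri])  # meshlet-local indices
    if prims:
        meshlets.append({"vertices": list(verts), "triangles": prims})
    return meshlets

# A strip of 10 triangles, packed 4 per meshlet.
indices = []
for i in range(10):
    indices += [i, i + 1, i + 2]
ms = build_meshlets(indices, max_verts=64, max_prims=4)
```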

Researchers Exploit GPU Fingerprinting to Track Users Online

Online tracking happens when third-party services collect information about various people and use it to identify them among other online users. This collection of specific information is often called "fingerprinting," and attackers usually exploit it to gain user information. Today, researchers have announced that they managed to use WebGL (Web Graphics Library) to create a unique fingerprint for every GPU out there and track users online. The exploit works because every piece of silicon has its own variations and unique characteristics when manufactured, just like each human has a unique fingerprint. Even among identical processor models, silicon differences make each product distinct. That is the reason why you cannot overclock every processor to the same frequency, and why binning exists.

What would happen if someone were to precisely measure the differences between GPUs and use those differences to identify online users? This is exactly what the researchers who created DrawnApart thought of. Using WebGL, they run a GPU workload that collects more than 176 measurements across 16 data-collection points. This is done using vertex operations in GLSL (OpenGL Shading Language), where workloads are prevented from being randomly distributed across the network of processing units. DrawnApart can measure and record the time to complete vertex renders, record the exact route that the rendering took, handle stall functions, and much more. This enables the framework to produce unique combinations of data that are turned into GPU fingerprints, which can be exploited online. Below you can see the data-trace recordings of two GPUs (same model) showing variations.
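The core idea is turning many noisy timing samples into a fixed-length vector that can be compared across sessions. A toy Python sketch of that reduction (a conceptual illustration, not the actual DrawnApart pipeline) might look like:

```python
import math

def fingerprint(timings, buckets=16):
    """Reduce a list of per-workload timing samples to a fixed-length,
    normalized fingerprint vector: one mean value per bucket."""
    n = len(timings) // buckets
    vec = [sum(timings[i * n:(i + 1) * n]) / n for i in range(buckets)]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def distance(fp_a, fp_b):
    """Euclidean distance between two fingerprints; small distances
    suggest the samples came from the same physical GPU."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(fp_a, fp_b)))

# Two synthetic timing traces standing in for two different GPUs.
trace_a = [1.0 + 0.1 * math.sin(i) for i in range(160)]
trace_b = [1.0 + 0.1 * math.cos(i) for i in range(160)]
fa, fb = fingerprint(trace_a), fingerprint(trace_b)
```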

Intel Releases oneAPI 2022 Toolkits to Developers

Intel today released oneAPI 2022 toolkits. Newly enhanced toolkits expand cross-architecture features to provide developers greater utility and architectural choice to accelerate computing. "I am impressed by the breadth of more than 900 technical improvements that the oneAPI software engineering team has done to accelerate development time and performance for critical application workloads across Intel's client and server CPUs and GPUs. The rich set of oneAPI technologies conforms to key industry standards, with deep technical innovations that enable applications developers to obtain the best possible run-time performance from the cloud to the edge. Multi-language support and cross-architecture performance acceleration are ready today in our oneAPI 2022 release to further enable programmer productivity on Intel platforms," said Greg Lavender, Intel chief technology officer, senior vice president and general manager of the Software and Advanced Technology Group.

New capabilities include the world's first unified compiler implementing C++, SYCL and Fortran, data parallel Python for CPUs and GPUs, advanced accelerator performance modeling and tuning, and performance acceleration for AI and ray tracing visualization workloads. The oneAPI cross-architecture programming model provides developers with tools that aim to improve the productivity and velocity of code development when building cross-architecture applications.

Intel Disables DirectX 12 API Loading on Haswell Processors

Intel's fourth-generation Core processors, codenamed Haswell, are subject to a new security exploit. According to the company, a vulnerability exists inside the graphics controller of 4th generation Haswell processors that is triggered when the DirectX 12 API is loaded. To fix the problem, Intel has simply disabled the API. Starting with Intel graphics driver 15.40.44.5107, applications that run exclusively on the DirectX 12 API no longer work with the following Intel graphics controllers: Intel Iris Pro Graphics 5200/5100, HD Graphics 5000/4600/4400/4200, and Intel Pentium and Celeron processors with Intel HD Graphics based on 4th Generation Intel Core.

"A potential security vulnerability in Intel Graphics may allow escalation of privilege on 4th Generation Intel Core processors. Intel has released a software update to mitigate this potential vulnerability. In order to mitigate the vulnerability, DirectX 12 capabilities were deprecated." says the Intel page. If a user with a Haswell processor has a specific need to run DirectX 12 applications, they can downgrade their graphics driver to version 15.40.42.5063 or older.

SiPearl Partners With Intel to Deliver Exascale Supercomputer in Europe

SiPearl, the designer of the high-performance, low-power microprocessor that will be at the heart of European supercomputers, has entered into a partnership with Intel to create a joint offering dedicated to the first exascale supercomputers in Europe. This partnership will offer their European customers the possibility of combining Rhea, the high-performance, low-power microprocessor developed by SiPearl, with Intel's Ponte Vecchio accelerator, thus creating a high-performance computing node that will promote the deployment of exascale supercomputing in Europe.

To enable this powerful combination, SiPearl plans to use and optimize for its Rhea microprocessor the open and unified programming interface, oneAPI, created by Intel. Using this single solution across the entire heterogeneous compute node, consisting of Rhea and Ponte Vecchio, will increase developer productivity and application performance.

Linux Foundation to Form New Open 3D Foundation

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced an intent to form the Open 3D Foundation to accelerate developer collaboration on 3D game and simulation technology. The Open 3D Foundation will support open source projects that advance capabilities related to 3D graphics, rendering, authoring, and development. As the first project governed by the new foundation, Amazon Web Services, Inc. (AWS) is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms and will provide the support and infrastructure of an open source community through forums, code repositories, and developer events. A developer preview of O3DE is available on GitHub today. For more information and/or to contribute, please visit: https://o3de.org

3D engines are used to create a range of virtual experiences, including games and simulations, by providing capabilities such as 3D rendering, content authoring tools, animation, physics systems, and asset processing. Many developers are seeking ways to build their intellectual property on top of an open source engine where the roadmap is highly visible, openly governed, and collaborative to the community as a whole. More developers look to be able to create or augment their current technological foundations with highly collaborative solutions that can be used in any development environment. O3DE introduces a new ecosystem for developers and content creators to innovate, build, share, and distribute immersive 3D worlds that will inspire their users with rich experiences that bring the imaginations of their creators to life.

Intel Ponte Vecchio GPU Scores Another Win in Leibniz Supercomputing Centre

Today, Lenovo, in partnership with Intel, announced that Leibniz Supercomputing Centre (LRZ) is building a supercomputer powered by Intel's next-generation technologies. Specifically, the supercomputer will use Intel's Sapphire Rapids CPUs in combination with the much-teased Ponte Vecchio GPUs to power the applications running at the centre. Alongside those processors, LRZ will also deploy Intel Optane persistent memory to process the huge amounts of data it produces. The integration of HPC and AI processing will be enabled by the expansion of LRZ's current supercomputer, SuperMUC-NG, which will receive an upgrade in 2022 featuring both Sapphire Rapids and Ponte Vecchio.

Raja Koduri, Intel's graphics chief, teased on Twitter that this installation will combine Sapphire Rapids, Ponte Vecchio, Optane, and oneAPI all in one machine. The system will use over one petabyte of Distributed Asynchronous Object Storage (DAOS) based on Optane technologies. Mr. Koduri also teased some Ponte Vecchio eye candy: a GIF of tiles combining to form a GPU, which you can check out here. You can also see some pictures of Ponte Vecchio below.

BittWare Launches IA-840F with Intel Agilex FPGA and Support for oneAPI

BittWare, a Molex company, today unveiled the IA-840F, the company's first Intel Agilex-based FPGA card, designed to deliver significant performance-per-watt improvements for next-generation data center, networking and edge compute workloads. Agilex FPGAs deliver up to 40% higher performance or up to 40% lower power, depending on application requirements. BittWare maximized I/O features using the Agilex chip's unique tiling architecture, with dual QSFP-DDs (4× 100G), PCIe Gen4 x16, and three MCIO expansion ports for diverse applications. BittWare also announced support for Intel oneAPI, which enables an abstracted development flow for dramatically simplified code re-use across multiple architectures.

"Modern data center workloads are incredibly diverse, requiring customers to implement a mix of scalar, vector, matrix and spatial architectures," said Craig Petrie, vice president of marketing for BittWare. "The IA-840F ensures that customers can quickly and easily exploit the advanced features of the Intel Agilex FPGA. For those customers who prefer to develop FPGA applications at an abstracted level, we are including support for oneAPI. This new unified software programming environment allows customers to program the Agilex FPGA from a single code base with native high-level language performance across architectures."

Intel and Argonne Developers Carve Path Toward Exascale 

Intel and Argonne National Laboratory are collaborating on the co-design and validation of exascale-class applications using graphics processing units (GPUs) based on Intel Xe-HP microarchitecture and Intel oneAPI toolkits. Developers at Argonne are tapping into Intel's latest programming environments for heterogeneous computing to ensure scientific applications are ready for the scale and architecture of the Aurora supercomputer at deployment.

"Our close collaboration with Argonne is enabling us to make tremendous progress on Aurora, as we seek to bring exascale leadership to the United States. Providing developers early access to hardware and software environments will help us jumpstart the path toward exascale so that researchers can quickly start taking advantage of the system's massive computational resources." -Trish Damkroger, Intel vice president and general manager of High Performance Computing.

AMD Graphics Drivers Have a CreateAllocation Security Vulnerability

Discovering vulnerabilities in software is not easy. There are many use cases and states that need to be tested to uncover a possible vulnerability. Still, security researchers know how to find them, and they usually report them to the company that made the software. Today, AMD disclosed a vulnerability present in the company's graphics driver, the software that powers its GPUs. Called CreateAllocation (CVE-2020-12911), the vulnerability carries a score of 7.1 on the CVSSv3 scale, meaning it is not a top priority, but it still represents a significant problem.

"A denial-of-service vulnerability exists in the D3DKMTCreateAllocation handler functionality of AMD ATIKMDAG.SYS 26.20.15029.27017. A specially crafted D3DKMTCreateAllocation API request can cause an out-of-bounds read and denial of service (BSOD). This vulnerability can be triggered from a guest account," says the report on the vulnerability. AMD states that a temporary workaround is simply restarting the computer if a BSOD happens. The company also declares that "confidential information and long-term system functionality are not impacted". AMD plans to release a fix for this software problem sometime in 2021 with a new driver release. You can read more about it here.

QNAP Announces Strategic Partnership with ownCloud GmbH

QNAP Systems, Inc., a leading computing, networking and storage solution innovator, and ownCloud, a leading open-source Content Collaboration Solution provider, today announced they have entered a global strategic partnership, combining QNAP's Network-attached Storage (NAS) with ownCloud's Enterprise Content Collaboration Software.

This long-term partnership focuses on providing ownCloud's Content Collaboration Solution for file sync and share on all QNAP NAS devices running QTS 4.4 onwards. In the coming months, the fully certified ownCloud Content Collaboration Solution will be available for easy installation from the QTS App Center. The installation packages are updated with every new release and continuously maintained in the interim. Users can choose between the free Community Edition and the paid Enterprise Edition, which includes premium features and professional support. Enterprise subscriptions will be available from the QNAP Software Store, and upgrading from the Community to the Enterprise Edition will be straightforward and require no reinstallation.

Cyberpunk 2077 a DX12-Only Release on PC

Marcin Gollent, Lead Graphics Programmer at CD Projekt RED, revealed in an interview with PC Games Hardware that Cyberpunk 2077 will only support DX12 in the PC release, which means that gamers playing on Windows 8 or (god forbid) older Windows releases won't be able to partake in the cyberpunk dream of Night City. A special note for Windows 7 users, though: the game will be supported on Windows 7's DX12 implementation as well. The decision to cut out other APIs isn't an opaque one - Marcin Gollent himself said that DX12 was chosen as the only development target because it is the rendering API for the Xbox family of consoles (including the next-generation ones), and thus a decision was made to streamline the rendering pipeline and API support.

The decision was also made, according to the developer, because DX12 is the birthplace of DXR, and CD Projekt RED has already announced that Cyberpunk 2077 will make heavy use of raytracing on the PC (and will almost certainly bring the same magic potion to the next-generation console update of the game). Marcin Gollent also said that the game will be compatible with all DX12 GPUs, though DX12 Ultimate hardware may be needed for some of the features that could be deployed in the final version of the game. A question could, of course, be asked about how some games' DX11 render path actually delivers better performance than their DX12 version. But with the game being developed with DX12 in mind from the start, we'll have to believe it's the best version it could be.

DirectX Coming to Linux...Sort of

Microsoft is preparing to add DirectX API support to WSL (Windows Subsystem for Linux). The latest Windows Subsystem for Linux 2 will virtualize DirectX for Linux applications running on top of it. WSL is a compatibility layer that lets Linux apps run on top of Windows. Unlike Wine, which attempts to translate Direct3D commands to OpenGL, what Microsoft is proposing is a real DirectX interface for apps in WSL, which can essentially talk to the hardware (the host's kernel-mode GPU driver) directly.

To this effect, Microsoft introduced the Linux-edition of DXGkrnl, a new kernel-mode driver for Linux that talks to the DXGkrnl driver of the Windows host. With this, Microsoft is promising to expose the full Direct3D 12, DxCore, and DirectML. It will also serve as a conduit for third party APIs, such as OpenGL, OpenCL, Vulkan, and CUDA. Microsoft expects to release this feature-packed WSL out with WDDM 2.9 (so a future version of Windows 10).

Khronos Group Releases OpenCL 3.0

Today, The Khronos Group, an open consortium of industry-leading companies creating advanced interoperability standards, publicly releases the OpenCL 3.0 Provisional Specifications. OpenCL 3.0 realigns the OpenCL roadmap to enable developer-requested functionality to be broadly deployed by hardware vendors, and it significantly increases deployment flexibility by empowering conformant OpenCL implementations to focus on functionality relevant to their target markets. OpenCL 3.0 also integrates subgroup functionality into the core specification, ships with a new OpenCL C 3.0 language specification, uses a new unified specification format, and introduces extensions for asynchronous data copies to enable a new class of embedded processors. The provisional OpenCL 3.0 specifications enable the developer community to provide feedback on GitHub before the specifications and conformance tests are finalized.

Intel iGPU+dGPU Multi-Adapter Tech Shows Promise Thanks to its Realistic Goals

Intel is revisiting the concept of asymmetric multi-GPU introduced with DirectX 12. The company posted an elaborate technical slide-deck it had originally planned to present to game developers at the now-cancelled GDC 2020. The technology shows promise because the company isn't insulting developers' intelligence by proposing that the dormant iGPU shoulder the game's entire rendering pipeline for a single-digit percentage performance boost. Rather, it has come up with innovative augmentations to the rendering path, such that only certain lightweight compute aspects of the game's rendering are passed on to the iGPU's execution units, giving it a more meaningful contribution to overall performance. To that effect, Intel is working on an SDK that can be integrated with existing game engines.

Microsoft DirectX 12 introduced the holy grail of multi-GPU technology in its Explicit Multi-Adapter specification. This allows game engines to send rendering traffic to any combination or make of GPUs that support the API, to achieve a performance uplift over a single GPU. It was met with a lukewarm reception from AMD and NVIDIA, and far too few DirectX 12 games actually support it. Intel proposes a specialization of the explicit multi-adapter approach, in which the iGPU's execution units process various low-bandwidth elements during both the rendering and post-processing stages, such as occlusion culling, AI, and game physics. Intel's method leverages cross-adapter shared resources sitting in system memory (main memory), and D3D12 asynchronous compute, which creates separate processing queues for rendering and compute.
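Conceptually, the scheduler's job is to offload only the lightweight, offloadable tasks to the iGPU up to some per-frame budget, keeping the heavy rendering on the dGPU. A toy Python model of that split (the task names, costs, and budget are made up for illustration; the real work happens in D3D12 command queues):

```python
def partition_frame_tasks(tasks, igpu_budget):
    """Assign cheap, offloadable compute tasks to the iGPU up to a
    cost budget (in milliseconds, say); everything else stays on the
    dGPU. Tasks are (name, cost, offloadable) tuples."""
    igpu, dgpu, used = [], [], 0.0
    for name, cost, offloadable in sorted(tasks, key=lambda t: t[1]):
        if offloadable and used + cost <= igpu_budget:
            igpu.append(name)
            used += cost
        else:
            dgpu.append(name)
    return igpu, dgpu

# Hypothetical per-frame workload.
frame = [
    ("geometry pass", 6.0, False),
    ("occlusion culling", 0.8, True),
    ("game physics", 1.2, True),
    ("ai pathfinding", 0.9, True),
    ("post-processing", 2.5, False),
]
igpu, dgpu = partition_frame_tasks(frame, igpu_budget=2.0)
```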

AMD RDNA 2 GPUs to Support the DirectX 12 Ultimate API

AMD today announced in the form of a blog post that its upcoming graphics cards based on RDNA 2 architecture will feature support for Microsoft's latest DirectX 12 Ultimate API. "With this architecture powering both the next generation of AMD Radeon graphics cards and the forthcoming Xbox Series X gaming console, we've been working very closely with Microsoft to help move gaming graphics to a new level of photorealism and smoothness thanks to the four key DirectX 12 Ultimate graphics features -- DirectX Raytracing (DXR), Variable Rate Shading (VRS), Mesh Shaders, and Sampler Feedback." - said AMD in the blog.

Reportedly, Microsoft and AMD have worked closely to enable this feature set and provide the best possible support for RDNA 2 based hardware, meaning that future GPUs and consoles are getting the best possible integration of the new API standard.

NVIDIA GeForce RTX GPUs to Support the DirectX 12 Ultimate API

NVIDIA graphics cards, starting from the current-generation GeForce RTX "Turing" lineup, will support the upcoming DirectX 12 Ultimate API. Thanks to a slide obtained by our friends over at VideoCardz, we have some information about the upcoming iteration of Microsoft's DirectX 12 API. The new revision, called "DirectX 12 Ultimate," brings several enhancements to the standard DirectX 12 API, which the leaked slide outlines.

The GeForce RTX lineup will support the updated version of the API with features such as ray tracing, variable-rate shading, mesh shaders, and sampler feedback. While we do not know why Microsoft decided to call this the "Ultimate" version, it is possibly meant to convey clearer information about which features are supported by the hardware. The leaked slide also mentions consoles, so the API is coming to that platform as well.

Khronos Group Releases Vulkan Ray Tracing

Today, The Khronos Group, an open consortium of industry-leading companies creating advanced interoperability standards, announces the ratification and public release of the Vulkan Ray Tracing provisional extensions, creating the industry's first open, cross-vendor, cross-platform standard for ray tracing acceleration. Primarily focused on meeting desktop market demand for both real-time and offline rendering, the release of Vulkan Ray Tracing as provisional extensions enables the developer community to provide feedback before the specifications are finalized. Comments and feedback will be collected through the Vulkan GitHub Issues Tracker and Khronos Developer Slack. Developers are also encouraged to share comments with their preferred hardware vendors. The specifications are available today on the Vulkan Registry.

Ray tracing is a rendering technique that realistically simulates how light rays intersect and interact with scene geometry, materials, and light sources to generate photorealistic imagery. It is widely used for film and other production rendering and is beginning to be practical for real-time applications and games. Vulkan Ray Tracing seamlessly integrates a coherent ray tracing framework into the Vulkan API, enabling a flexible merging of rasterization and ray tracing acceleration. Vulkan Ray Tracing is designed to be hardware agnostic and so can be accelerated on both existing GPU compute and dedicated ray tracing cores if available.
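The primitive operation a ray tracing framework accelerates is the ray-geometry intersection test described above. A minimal Python sketch of the classic ray-sphere intersection (the textbook quadratic, shown here as a conceptual illustration rather than anything Vulkan-specific):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest hit of a ray with a
    sphere, or None if the ray misses. Direction must be normalized,
    so the quadratic's 'a' coefficient is 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray line never touches the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None  # reject hits behind the origin

# A ray shot down +z from the origin toward a sphere at (0, 0, 5).
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
miss = ray_sphere_hit((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0)
```

Hardware ray tracing cores and the Vulkan acceleration structures exist precisely to avoid running tests like this against every primitive in a scene.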