News Posts matching #Rendering


NVIDIA Blackwell RTX and AI Features Leaked by Inno3D

NVIDIA's RTX 5000 series GPU hardware has been leaked repeatedly in the weeks and months leading up to CES 2025, with previous leaks tipping significant updates for the RTX 5070 Ti in the VRAM department. Now, Inno3D is apparently hinting that the RTX 5000 series will also introduce updated machine learning and AI tools to NVIDIA's GPU line-up. An official CES 2025 teaser published by Inno3D, titled "Inno3D At CES 2025, See You In Las Vegas!" makes mention of potential updates to NVIDIA's AI acceleration suite for both gaming and productivity.

The Inno3D teaser specifically points out "Advanced DLSS Technology," "Enhanced Ray Tracing" with new RT cores, "better integration of AI in gaming and content creation," "AI-Enhanced Power Efficiency," AI-powered upscaling tech for content creators, and optimizations for generative AI tasks. All of this sounds like it builds on previous NVIDIA technology, such as RTX Video Super Resolution, although the mention of content creation suggests it will be more capable than previous efforts, which were seemingly mostly consumer-focused. Improved RT cores in the new RTX 5000 GPUs are also expected, although this would seemingly be the first time NVIDIA uses AI to manage power draw, suggesting that the CES announcement will come with new features for the NVIDIA App. The real standout features, though, are "Neural Rendering" and "Advanced DLSS," both of which are new nomenclature. Of course, "Advanced DLSS" may simply be Inno3D marketing copy, but "Neural Rendering" suggests that NVIDIA will "revolutionize how graphics are processed and displayed," which is about as vague as one could be.

AMD Designs Neural Block Compression Tech for Games: Smaller Downloads and Updates

AMD is developing a new technology that promises to significantly reduce the on-disk size of games, as well as the size of game patches and updates. Today's AAA games tend to be over 100 GB in size, with game updates running into tens of gigabytes, and some major updates practically re-download the entire game. Upcoming games like Call of Duty: Black Ops 6 are reportedly over 300 GB in size, which puts them out of reach for anyone without an Internet connection running at hundreds of Mbps. Much of a game's bulk is made up of visual assets—textures, sprites, and cutscene videos. A modern AAA title can have hundreds of thousands of individual game assets, and sometimes even redundant sets of textures for different image-quality settings.

AMD's solution to this problem is its Neural Block Compression technology. The company will get into the nuts and bolts of the tech in its presentation at the 2024 Eurographics Symposium on Rendering (July 3-5), but we have a rough idea of what it could be. Modern games don't just drape the surfaces of a wireframe with a texture; they apply additional layers, such as specular maps, normal maps, and roughness maps. AMD's idea is to "flatten" all these layers, including the base texture, into a single asset format, which the game engine disaggregates back into the individual layers using an AI neural network. This is not to be confused with mega-textures, which are something entirely different, relying on a single large texture covering all objects in a scene. The idea here is to flatten the various data layers of individual textures and their maps into a single asset type. In theory, this should yield significant file-size savings, even if it incurs some additional compute cost on the client's end.
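The flatten-and-reconstruct idea can be sketched in a few lines. AMD has not published its asset format or decoder, so the channel layout and the channel-slicing "decoder" below are purely illustrative assumptions standing in for the compressed representation and neural network:

```python
# Hypothetical illustration of the "flatten layers -> one asset -> reconstruct"
# data flow. The 7-channel layout and slicing decoder are assumptions for
# demonstration, not AMD's actual (unpublished) scheme.

def flatten_layers(base_color, normal_map, roughness):
    """Pack per-pixel layers (RGB albedo, XYZ normal, scalar roughness)
    into a single 7-channel asset, one tuple per pixel."""
    return [bc + nm + (r,) for bc, nm, r in zip(base_color, normal_map, roughness)]

def disaggregate(asset):
    """Stand-in for the neural decoder: recover the individual layers.
    A real implementation would run an AI network on a *compressed*
    representation; here we just slice channels to show the data flow."""
    base = [px[0:3] for px in asset]
    norm = [px[3:6] for px in asset]
    rough = [px[6] for px in asset]
    return base, norm, rough

# A two-pixel "texture" as a demo.
albedo = [(0.8, 0.2, 0.1), (0.1, 0.9, 0.3)]
normals = [(0.0, 0.0, 1.0), (0.1, 0.0, 0.9)]
rough = [0.5, 0.7]

asset = flatten_layers(albedo, normals, rough)
b, n, r = disaggregate(asset)
assert (b, n, r) == (albedo, normals, rough)  # lossless in this toy sketch
```

A real codec would compress the flattened block (lossily) before storage; the file-size win comes from exploiting correlation between the layers, which separate per-layer compression cannot.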

Unreal Engine 5.4 is Now Available With Improvements to Nanite, AI and Machine Learning, TSR, and More

Unreal Engine 5.4 is here, and it's packed with new features and improvements to performance, visual fidelity, and productivity that will benefit game developers and creators across industries. With this release, we're delivering the toolsets we've been using internally to build and ship Fortnite Chapter 5, Rocket Racing, Fortnite Festival, and LEGO Fortnite. Here are some of the highlights.

Animation
Character rigging and animation authoring
This release sees substantial updates to Unreal Engine's built-in animation toolset, enabling you to quickly, easily, and enjoyably rig characters and author animation directly in engine, without the frustrating and time-consuming need to round trip to external applications. With an Experimental new Modular Control Rig feature, you can build animation rigs from understandable modular parts instead of complex granular graphs, while Automatic Retargeting makes it easier to get great results when reusing bipedal character animations. There are also extensions to the Skeletal Editor and a suite of new deformer functions to make the Deformer Graph more accessible.

Khronos Publishes Vulkan Roadmap 2024, Highlights Expanded 3D Features

Today, The Khronos Group, an open consortium of industry-leading companies creating advanced interoperability standards, announced the latest roadmap milestone for Vulkan, the cross-platform 3D graphics and compute API. The Vulkan roadmap targets the "immersive graphics" market, made up of mid- to high-end smartphones, tablets, laptops, consoles, and desktop devices. The Vulkan Roadmap 2024 milestone captures a set of capabilities that are expected to be supported in new products for that market, beginning in 2024. The roadmap specification provides a significant increase in functionality for the targeted devices and sets the evolutionary direction of the API, including both new hardware capabilities and improvements to the programming model for Vulkan developers.

Vulkan Roadmap 2024 is the second milestone release on the Vulkan Roadmap. Products that support it must be Vulkan 1.3 conformant and support the extensions and capabilities defined in both the 2022 and 2024 Roadmap specifications. Vulkan roadmap specifications use the Vulkan Profile mechanism to help developers build portable Vulkan applications; roadmap requirements are expressed in machine-readable JSON files, and tooling in the Vulkan SDK auto-generates code that makes it easy for developers to query for and enable profile support in their applications.
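As a toy illustration of checking machine-readable requirements against a device's reported capabilities: the real Vulkan Profiles JSON schema and the SDK-generated query code are far more elaborate, so the keys and the single-extension requirement below are simplified assumptions, not the actual schema:

```python
import json

# Simplified, hypothetical profile file: a name, a minimum API version,
# and a list of required extensions. (The real schema covers features,
# limits, formats, and more.)
profile_json = """
{
  "name": "VP_KHR_roadmap_2024",
  "requiredApiVersion": "1.3",
  "requiredExtensions": ["VK_KHR_dynamic_rendering_local_read"]
}
"""

def profile_supported(profile, device):
    """Return True if the device meets the profile's requirements.
    Note: toy string comparison of versions; real tooling compares
    packed version numbers."""
    p = json.loads(profile)
    if device["apiVersion"] < p["requiredApiVersion"]:
        return False
    return all(ext in device["extensions"] for ext in p["requiredExtensions"])

device = {"apiVersion": "1.3",
          "extensions": ["VK_KHR_dynamic_rendering_local_read"]}
print(profile_supported(profile_json, device))  # True
```

In practice, developers would not write this check by hand: the Vulkan SDK's profile tooling generates the query-and-enable code from the JSON automatically.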

Pixelworks & NetEase Partner Up on Entirely New Visual Mobile Experiences

Pixelworks, Inc., a leading provider of visual processing solutions, today announced a partnership with Revelation, a flagship game IP developed by the Thunderfire Business Group, a game studio of leading game developer NetEase Games, providing visual processing solutions to optimize the display quality of the game's mobile version. Revelation Mobile integrates Pixelworks' Rendering Accelerator SDK to significantly enhance visual quality for mobile gaming. Coupled with a Pixelworks X7 visual processor, the Rendering Accelerator works as a bridge, delivering an exceptional 120 FPS visual experience on mobile devices with low power consumption, enabling a more immersive experience for mobile gamers.

Revelation Mobile is a flagship mobile game developed by NetEase Games. Centered on a theme of oriental fantasy, the game creates a 3D fantasy world where characters can soar to the heights of the heavens and plunge into the depths of the seas, faithfully recreating the vast and beautiful Cloud Song Continent. The game boasts stunning, eye-catching visuals, from boundless cloudscapes to enchanting underwater scenes, all meticulously crafted to bring the world to life. Rich colors and impressive lighting effects make the Cloud Song Continent, a realm where diverse human landscapes intertwine with natural beauty, even more vivid and vibrant. Players follow their characters on a journey of growth, where navigating vast expanses of distant skies and deep oceans lets them embrace an innate sense of freedom, while enduring the challenges of pioneering exploration and harvesting unique life experiences through blocking strategies and gaming moves.

Maxon Introduces Cinebench 2024

Maxon, developers of professional software solutions for editors, filmmakers, motion designers, visual effects artists and creators of all types, is thrilled to announce the highly anticipated release of Cinebench 2024. This latest iteration of the industry-standard benchmarking software, which has been a cornerstone in computer performance evaluation for two decades, sets a new standard for performance evaluation, embracing cutting-edge technology to provide artists, designers, and creators with a more accurate and relevant representation of their hardware capabilities.

Redshift Rendering Engine Integration
Cinebench 2024 ushers in a new era by embracing the power of Redshift, Cinema 4D's default rendering engine. Unlike its predecessors, which used Cinema 4D's standard renderer, Cinebench 2024 runs the same rendering algorithms across both CPU and GPU implementations. This leap to the Redshift engine ensures that performance testing aligns with the demands of modern creative workflows, delivering accurate and consistent results.

NVIDIA Previews Ray Tracing: Overdrive Mode in Cyberpunk 2077, Update Arriving April 11

CD PROJEKT RED's Cyberpunk 2077 is already one of the most technologically advanced games available, using several ray tracing techniques to render its neon-illuminated environments and vast Night City visuals at incredible levels of detail. On April 11th, a new Cyberpunk 2077 update will hit the streets, featuring the technology preview of the Ray Tracing: Overdrive Mode, which enhances the game's already-amazing visuals with full ray tracing, otherwise known as path tracing.

Full ray tracing accurately simulates light throughout an entire scene. It is used by visual effects artists to create film and TV graphics that are indistinguishable from reality, but until the arrival of GeForce RTX GPUs with RT Cores, and the AI-powered acceleration of NVIDIA DLSS, real-time video game full ray tracing was impossible because it's extremely GPU intensive.

Intel Arc A370M Graphics Card Tested in Various Graphics Rendering Scenarios

Intel's Arc Alchemist graphics cards launched in the laptop/mobile space, and everyone is wondering just how well this first generation of discrete graphics performs in actual GPU-accelerated workloads. Tellusim Technologies, a software company based in San Diego, managed to get hold of a laptop featuring an Intel Arc A370M mobile graphics card and benchmark it against competing solutions. Instead of the Vulkan API, the team used the D3D12 API for its tests, as Vulkan usually produces lower results on Intel's new Gen12 graphics. With driver version 30.0.101.1736, the GPU was tested mainly on standard geometry workloads measuring triangle and batch throughput. Meshlet size is set to 69/169, and the job is as big as 262K meshlets. The total amount of geometry is 20 million vertices and 40 million triangles per frame.

Using the tests such as Single DIP (drawing 81 instances with u32 indices without going to Meshlet level), Mesh Indexing (Mesh Shader emulation), MDI/ICB (Multi-Draw Indirect or Indirect Command Buffer), Mesh Shader (Mesh Shaders rendering mode) and Compute Shader (Compute Shader rasterization), the Arc GPU produced some exciting numbers, measured in millions or billions of triangles. Below, you can see the results of these tests.

Adobe MAX 2021: Unleashing Creativity for All with the Next Generation of Creative Cloud

Today, Adobe kicked off Adobe MAX 2021, the largest creativity conference in the world. The company delivered innovation across Creative Cloud flagship applications and introduced new collaboration capabilities to fuel new levels of creativity for millions of customers worldwide, from students to social media creators to creative professionals.

At Adobe MAX, the company announced major updates across Creative Cloud flagship applications powered by Adobe Sensei, accelerated the video creation process with the addition of Frame.io and advanced 3D and immersive authoring abilities. Adobe also previewed new collaboration capabilities with the introduction of Creative Cloud Canvas, Creative Cloud Spaces and betas of Photoshop and Illustrator on the web.

Linux Foundation to Form New Open 3D Foundation

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced an intent to form the Open 3D Foundation to accelerate developer collaboration on 3D game and simulation technology. The Open 3D Foundation will support open source projects that advance capabilities related to 3D graphics, rendering, authoring, and development. As the first project governed by the new foundation, Amazon Web Services, Inc. (AWS) is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms and will provide the support and infrastructure of an open source community through forums, code repositories, and developer events. A developer preview of O3DE is available on GitHub today. For more information and/or to contribute, please visit: https://o3de.org

3D engines are used to create a range of virtual experiences, including games and simulations, by providing capabilities such as 3D rendering, content authoring tools, animation, physics systems, and asset processing. Many developers are seeking ways to build their intellectual property on top of an open source engine where the roadmap is highly visible, openly governed, and collaborative with the community as a whole. More developers are looking to create or augment their current technological foundations with highly collaborative solutions that can be used in any development environment. O3DE introduces a new ecosystem for developers and content creators to innovate, build, share, and distribute immersive 3D worlds that will inspire their users with rich experiences that bring the imaginations of their creators to life.

AMD Announces Radeon Pro VII Graphics Card, Brings Back Multi-GPU Bridge

AMD today announced its Radeon Pro VII professional graphics card targeting 3D artists, engineering professionals, broadcast media professionals, and HPC researchers. The card is based on AMD's "Vega 20" multi-chip module that incorporates a 7 nm (TSMC N7) GPU die, along with a 4096-bit wide HBM2 memory interface, and four memory stacks adding up to 16 GB of video memory. The GPU die is configured with 3,840 stream processors across 60 compute units, 240 TMUs, and 64 ROPs. The card is built in a workstation-optimized add-on card form-factor (rear-facing power connectors and lateral-blower cooling solution).

What separates the Radeon Pro VII from last year's Radeon VII is full double precision floating point support, which is 1:2 FP32 throughput compared to the Radeon VII, which is locked to 1:4 FP32. Specifically, the Radeon Pro VII offers 6.55 TFLOPs double-precision floating point performance (vs. 3.36 TFLOPs on the Radeon VII). Another major difference is the physical Infinity Fabric bridge interface, which lets you pair up to two of these cards in a multi-GPU setup to double the memory capacity, to 32 GB. Each GPU has two Infinity Fabric links, running at 1333 MHz, with a per-direction bandwidth of 42 GB/s. This brings the total bidirectional bandwidth to a whopping 168 GB/s—more than twice the PCIe 4.0 x16 limit of 64 GB/s.
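The quoted bridge figures can be sanity-checked with simple arithmetic; this uses only the numbers stated above (two links, 42 GB/s per link per direction):

```python
# Sanity-checking the Radeon Pro VII Infinity Fabric bandwidth figures
# quoted in the text.
links = 2                # Infinity Fabric links per GPU
per_direction_gb_s = 42  # GB/s per link, per direction
directions = 2           # bidirectional traffic

total_bidirectional = links * per_direction_gb_s * directions
pcie4_x16 = 64           # GB/s, PCIe 4.0 x16 limit quoted in the text

print(total_bidirectional)                   # 168
print(total_bidirectional > 2 * pcie4_x16)   # True: more than twice PCIe 4.0 x16
```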

NVIDIA Develops Tile-based Multi-GPU Rendering Technique Called CFR

NVIDIA is invested in the development of multi-GPU rendering, specifically SLI over NVLink, and has developed a new multi-GPU rendering technique that appears to be inspired by tile-based rendering. Implemented at the single-GPU level, tile-based rendering has been one of NVIDIA's secret sauces for improved performance since its "Maxwell" family of GPUs. 3DCenter.org discovered that NVIDIA is working on a multi-GPU version of it, called CFR, which could be short for "checkerboard frame rendering" or "checkered frame rendering." The method is already quietly deployed in current NVIDIA drivers, although it is not documented for developers to implement.

In CFR, the frame is divided into tiny square tiles, like a checkerboard. Odd-numbered tiles are rendered by one GPU, and even-numbered ones by the other. Unlike AFR (alternate frame rendering), in which each GPU's dedicated memory holds a copy of all the resources needed to render the frame, methods like CFR and SFR (split frame rendering) optimize resource allocation. CFR also purportedly exhibits less micro-stutter than AFR. 3DCenter also detailed the features and requirements of CFR. To begin with, the method is only compatible with DirectX (including DirectX 12, 11, and 10), not OpenGL or Vulkan. For now it's "Turing"-exclusive, since NVLink is required (probably because its bandwidth is needed to virtualize the tile buffer). Tools like NVIDIA Profile Inspector let you force CFR on, provided the other hardware and API requirements are met. It still has many compatibility problems, and remains practically undocumented by NVIDIA.
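The checkerboard split can be sketched as follows. NVIDIA's tile size and scheduling are undocumented internals, so the 32-pixel tiles and the simple parity rule here are assumptions for illustration only:

```python
# Minimal sketch of checkerboard tile assignment as described for CFR:
# the frame is split into small square tiles, and alternating tiles are
# rendered by each of the two GPUs.

def assign_tiles(frame_w, frame_h, tile=32):
    """Map each tile's top-left corner (tx, ty) to GPU 0 or GPU 1
    in a checkerboard pattern."""
    assignment = {}
    for ty in range(0, frame_h, tile):
        for tx in range(0, frame_w, tile):
            # The parity of the tile coordinates alternates like a checkerboard.
            assignment[(tx, ty)] = ((tx // tile) + (ty // tile)) % 2
    return assignment

tiles = assign_tiles(128, 64)
# Horizontally and vertically adjacent tiles always land on different GPUs:
assert tiles[(0, 0)] != tiles[(32, 0)]
assert tiles[(0, 0)] != tiles[(0, 32)]
```

The appeal over AFR is visible even in this toy: both GPUs work on the *same* frame, so neither needs to buffer a whole frame ahead, which is consistent with the reduced micro-stutter 3DCenter reports.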

EA Reveals Next-Generation Hair Rendering for Frostbite

In the gaming industry, everything revolves around game graphics. GPUs are integrating new technologies such as ray tracing, and there are plenty of software tools dedicated to making in-game visuals look as realistic as possible. Electronic Arts, one of the biggest publishers of state-of-the-art AAA games, today revealed an update to DICE's Frostbite engine.

DICE's Frostbite engine powers many of today's AAA titles, such as Battlefield V, Anthem, and Star Wars Battlefront. Today it got a big update: EA released new capabilities for rendering the hair of in-game characters with near-lifelike realism. This is impressive considering that hair is notoriously difficult to model artificially. As one of the most interesting topics for game developers, good hair animation is extremely important to achieving the lifelike look newer AAA titles are targeting.

Intel Launches Free Open Image Denoise Library for Ray-tracing

De-noising is a vital post-processing component of ray-traced images, as it eliminates the visual noise generated when too few rays intersect the pixels that make up an image. In an ideal world, enough rays would be traced through every pixel for the image to converge, but real-world computing hasn't advanced enough to do that in real time. Denoising attempts to correct and reconstruct such images. Intel today launched its free Open Image Denoise (OIDN) library for ray-tracing.
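The "too few rays" noise is a Monte Carlo sampling effect: each pixel averages random radiance samples, so its variance shrinks as more rays are averaged (roughly as 1/sqrt(n)). A minimal sketch, using uniform random numbers as a stand-in for per-ray radiance contributions:

```python
import random
import statistics

random.seed(42)  # deterministic demo

def pixel_estimate(n_rays):
    """Average n random radiance samples (uniform on [0, 1), true mean 0.5).
    A stand-in for tracing n rays through one pixel."""
    return sum(random.random() for _ in range(n_rays)) / n_rays

def error(n_rays, trials=2000):
    """Standard deviation of the pixel estimate across many trials,
    i.e. how 'noisy' a pixel looks at this ray count."""
    return statistics.pstdev(pixel_estimate(n_rays) for _ in range(trials))

# More rays per pixel -> less noise.
assert error(64) < error(4) < error(1)
```

A denoiser like OIDN sidesteps this cost: instead of waiting for enough rays to drive the variance down, it reconstructs a clean image from the few-sample estimate.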

Governed by the Apache 2.0 license, OIDN is part of the Intel Rendering Framework. From the looks of it, the library is CPU-based, and leverages 64-bit x86 CPUs (scaling with multiple cores and exotic instruction sets) to de-noise images. Intel says OIDN works on any device with a 64-bit x86 processor (with at least the SSE4.2 instruction set), although it can take advantage of AVX2 and AVX-512 to speed things up by an order of magnitude. The closest (and closed) alternative to OIDN would be NVIDIA's AI De-noiser: NVIDIA "Turing" GPUs use a combination of deep-learning neural networks and GPU compute to de-noise. You can freely access OIDN on Intel's GitHub.

Distributed GPU Rendering on the Blockchain is The New Normal, and It's Much Cheaper Than AWS

Otoy, based in Los Angeles, announced a few months ago the launch of RNDR, a cloud rendering platform that is based on the same blockchain used on the Ethereum platform. The idea is simple: it leverages a distributed network of idle GPUs to render graphics more quickly and efficiently.

The solution takes advantage of the unused power of our GPUs, allowing those who need images rendered at full speed to do so through the platform. RNDR distributes revenue through its own blockchain in a decentralized fashion, and with some 1,200 contributors counted in a recent survey, Otoy says it runs the world's largest cloud rendering platform. It has even been praised by Hollywood director and producer J.J. Abrams; Brendan Eich, founder of Brave and Basic Attention Token; and famed talent agent Ari Emanuel.
Dec 17th, 2024 23:30 EST
