News Posts matching #GPU


NVIDIA Launches DLSS 4 Plugin for Unreal Engine 5.6

NVIDIA has released its DLSS 4 plugin for Unreal Engine 5.6, allowing developers to incorporate the company's most advanced upscaling and AI-driven frame generation tools into their projects. With this update, the performance improvements introduced in UE 5.6, such as optimized memory handling and refined rendering pipelines, are joined by Transformer-based upscaling, which significantly lowers VRAM usage while generating extra frames to improve motion smoothness. Developers on Unreal Engine 5.6 can now integrate multi-frame generation, ray reconstruction, deep learning anti-aliasing, and super resolution in one convenient package that requires fewer resources than earlier DLSS versions.

The DLSS 4 plugin relies on a Transformer neural network that evaluates the relationships among all pixels in a frame rather than using localized convolutional filters. This technique reduces video memory requirements by approximately 15-20% compared with the prior DLSS 3 SDK, freeing up capacity for higher-resolution textures and more detailed environments. Multi-frame generation predicts intermediate frames based on recent frame history, effectively boosting perceived frame rates without additional GPU draw calls. Ray reconstruction enhances reflections and global illumination by learning from high‑quality offline renders, delivering realistic lighting with minimal performance loss. Early feedback indicates a 30-50% uplift in ray-traced effects quality in GPU-bound scenes, as well as noticeably sharper visuals under dynamic lighting. The plugin supports Unreal Engine versions 5.2 through 5.6; however, only projects running on version 5.6 can access the full suite of improvements.
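
To picture why a Transformer-based upscaler differs from a convolutional one, it helps to contrast their receptive fields: a convolution mixes each pixel only with its immediate neighbors, while attention can weight every pixel in the frame against every other. The toy sketch below illustrates only that structural difference; the uniform kernel and the similarity weighting are made up for illustration and are in no way NVIDIA's actual network.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Toy 8x8 single-channel "frame" used to contrast the two receptive-field models.
constexpr int W = 8, H = 8;

// Convolution: each output pixel sees only a local 3x3 neighborhood.
float conv3x3(const std::vector<float>& img, int x, int y) {
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = std::clamp(x + dx, 0, W - 1);
            int ny = std::clamp(y + dy, 0, H - 1);
            sum += img[ny * W + nx] / 9.0f;  // uniform kernel, purely illustrative
        }
    return sum;
}

// Attention-style mixing: each output pixel is a softmax-like weighted sum over
// every pixel in the frame, with weights derived from value similarity.
float attend(const std::vector<float>& img, int x, int y) {
    float q = img[y * W + x], norm = 0.0f, out = 0.0f;
    for (float v : img) {
        float w = std::exp(-(q - v) * (q - v));  // made-up similarity score
        norm += w;
        out += w * v;
    }
    return out / norm;
}

int main() {
    std::vector<float> img(W * H);
    for (int i = 0; i < W * H; ++i) img[i] = float(i % 7) / 6.0f;
    std::printf("conv(4,4)=%.3f  attn(4,4)=%.3f\n",
                conv3x3(img, 4, 4), attend(img, 4, 4));
}
```

That global weighting is what lets the real network pull detail from distant context that a localized filter never sees.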

NVIDIA GeForce RTX 50-series GPUs Make a Dent in Latest Steam Hardware Survey

NVIDIA's nearly complete GeForce RTX 50 family, powered by the new Blackwell architecture, has begun to register meaningful numbers in Steam's June 2025 Hardware Survey. Since first appearing in May, cards from this lineup, except for the as-yet-unavailable RTX 5050, now account for 3.69% of surveyed systems. Leading the pack among the newcomers, the RTX 5070 grabs nearly 1% of the overall share, up substantially from its debut, while the RTX 5080 and RTX 5070 Ti follow closely behind. The more budget-oriented RTX 5060 Ti and RTX 5060 have also made their mark, and even the top-end RTX 5090 has registered on enough machines to appear in the survey.

These figures show a swift uptake by desktop gamers eager for improved performance and AI-driven features, even as the tried-and-true RTX 4060 Laptop GPU holds onto its position as the most prevalent NVIDIA part, with just under 5% of the installed base. Meanwhile, AMD's latest Radeon RX 9000 series and Intel's Arc B‑series remain absent from the survey results, suggesting that shipment volumes for those cards have not yet reached the critical mass needed to register with Valve's monthly sampling of millions of Steam users. This demonstrates NVIDIA's continued dominance in add-in board sales, where it has consistently captured over 90% of the market.

PlayStation 5 Pro to Gain Full AMD FSR 4 Integration in 2026

Sony has announced that the PlayStation 5 Pro will receive the full version of AMD's FidelityFX Super Resolution 4 upscaling technology in 2026. The upgrade delivers the same engine PC users have had access to since its March release, replacing the current PlayStation Spectral Super Resolution (PSSR) system without any reduction in functionality. The update stems from Project Amethyst, the collaborative Sony-AMD effort announced alongside the PS5 Pro's late-2024 debut. In a recent interview, PlayStation lead system architect Mark Cerny confirmed that the jointly developed algorithm will be implemented on the console exactly as it appears on PC: PS5 Pro owners get the full feature set of FSR 4, with nothing cut down.

AMD introduced Sony to its comprehensive quality-assurance practices and helped establish a dedicated QA team focused solely on upscaling performance. Participants from both companies credit this reciprocal exchange with accelerating development, achieving significant milestones in under nine months. Sony plans to distribute the FSR 4 upgrade as a free system update to all PS5 Pro users, while AMD will fold lessons learned from Project Amethyst into its next RDNA 5/UDNA graphics architecture. Because neither company placed usage restrictions on the shared research, third-party developers and hardware partners can also adopt these advancements. For PS5 Pro owners, the upgrade promises a noticeable improvement in image clarity without compromising performance; for next-generation RDNA 5/UDNA GPUs, upscaling should arrive even more finely tuned, since developers will already have done the equivalent work on the console.

Gamers Reject RTX 5060 Ti 8 GB — Outsold 16:1 by 16 GB Model

Sales data from Mindfactory.de, one of Germany's largest retailers and the only one that publishes its figures, gives us insight into how NVIDIA's GeForce RTX 5060 Ti 16 GB and 8 GB models are selling. Summing the units sold, the 16 GB version of the GeForce RTX 5060 Ti is outselling the 8 GB version by nearly 16 times, roughly a 1,500% difference. Mindfactory tags each GeForce RTX 5060 Ti listing with the number of units sold to customers, covering every model the retailer offers, including SKUs from MSI, GIGABYTE, INNO3D, Palit, ZOTAC, ASUS, and other AIC partners. At the time of writing, the 8 GB version of the GeForce RTX 5060 Ti has sold 105 units, compared to 1,675 units of the 16 GB version.

It is important to note that this 8 GB vs. 16 GB comparison is based on Mindfactory, a single regional retailer, so it shows only one part of the GPU sales story. We also considered supply as a possible cause of the massive difference in unit sales; however, availability of both SKUs is good, as checked on Geizhals.de, with most models in stock in 20+ quantities, so availability is not hindering sales. This is not the typical AMD vs. NVIDIA sales comparison; it is a split within a single product family, which highlights something much more specific: gamers are willing to spend a few dozen euros extra to get the 16 GB version of their chosen GPU, essentially "future-proofing" their system for more demanding games. Especially as features like path tracing demand more VRAM, the extra 8 GB of memory buffer lets the GPU run modern games with ease. For gamers who plan to sell their system down the road for an upgrade, the 16 GB version will also be more prized on secondary markets such as eBay. We launched a new poll here, so make sure to let us know which card you would purchase, as the battle for ever-greater VRAM capacity continues.

Palit Introduces NVIDIA GeForce RTX 5050 Dual and StormX Series Graphics Cards

As a leading innovator in graphics card technology, Palit Microsystems Ltd. is excited to unveil its latest GeForce RTX 5050 series. Powered by NVIDIA Blackwell, GeForce RTX 50 Series GPUs bring game-changing capabilities to gamers and creators. Equipped with a massive level of AI horsepower, the RTX 50 Series enables new experiences and next-level graphics fidelity. Multiply performance with NVIDIA DLSS 4, generate images at unprecedented speed, and unleash creativity with NVIDIA Studio.

GeForce RTX 5050 Dual
The Palit GeForce RTX 5050 Dual offers classic cooling efficiency in a sleek, modern black design with durable build quality. The large twin fans provide competitive cooling and quiet operation, while a compact 2-slot design suits small form factor builds.

NVIDIA's v580 Driver Branch Ends Support for Maxwell, Pascal, and Volta GPUs

NVIDIA has confirmed that its upcoming 580 driver series will be the final release to offer official updates for three of its older GPU architectures. In a recent update to its UNIX graphics deprecation schedule, the company noted that once version 580 ships, Maxwell-, Pascal-, and Volta-based products will no longer receive new drivers or patches. Although this announcement originates from NVIDIA's UNIX documentation, the unified driver codebase means Windows users will face the same cutoff. Owners of Maxwell-era GeForce GTX 700 and GTX 900 cards, Pascal's GeForce GTX 10 lineup, and the consumer-focused Volta TITAN V can expect their last set of performance optimizations and security fixes in the 580 branch.

NVIDIA further explains that ending support for these legacy chips allows its engineering teams to focus on newer hardware platforms. For example, Turing-based GeForce GTX 16 series cards will continue to receive updates beyond driver v580, ensuring gamers can benefit from the latest optimizations and stability improvements even on older platforms. First introduced between 2014 and 2017, Maxwell, Pascal, and Volta GPUs will have enjoyed eight to eleven years of official maintenance, among the longest support windows in the industry. While existing driver installations will remain operational, NVIDIA recommends that users who depend on these older cards begin planning upgrades to maintain full compatibility and access new features. At the time of this announcement, the public driver sits at version 576.80, and NVIDIA has not yet set a firm release date for the 580 series, leaving affected users a window of several months before support officially ends.

New Monster Hunter Wilds Patch Lands To Address PC Performance Issues

Monster Hunter Wilds players on PC have had a rough time of it lately when it comes to performance, with the game's recent Steam reviews seeing a number of players complaining about random stutters, FPS drops, and a general lack of optimization. Even gamers playing on relatively high-end recent GPUs, like the AMD Radeon RX 7900 XT, report that their experience has been tainted by poor performance, even if the actual game content is good. With the latest Monster Hunter Wilds patch, though, Capcom is attempting to address those performance issues, and the resulting slew of negative reviews, as announced today in a post on X.

While there are new monsters, weapons, cosmetics, equipment, and other in-game content, the majority of the 1.020.00.00 update (full notes here) focuses on those performance fixes. Capcom has changed the way shader compilation works, making the CPU-intensive task take place the first time you run the game after an update, and has shipped a slew of upscaler and frame-generation changes, primarily adding DLSS 4 support for GeForce RTX 2000-series and newer GPUs and FSR 4 support for AMD Radeon RX 9000-series cards. The patch also allows players to mix and match upscaling and frame generation methods, which should let them better tune the game's visuals and performance. Additional fixes include reduced VRAM usage from texture streaming and a more accurate calculation of estimated VRAM consumption. Steam users also now get a notification upon launching Monster Hunter Wilds if they are running an unsupported operating system or out-of-date GPU drivers, or if they are running the game in compatibility mode.

NVIDIA's Dominance Challenged as Largest AI Lab Adopts Google TPUs

NVIDIA's AI hardware dominance is being challenged as the world's leading AI lab, OpenAI, taps into Google TPU hardware, marking a significant effort to move away from single-vendor solutions. In June 2025, OpenAI began leasing Google Cloud's Tensor Processing Units to handle ChatGPT's increasing inference workload, the first time OpenAI has relied on non-NVIDIA chips in large-scale production. Until recently, NVIDIA GPUs powered both model training and inference for OpenAI's products. Training large language models on those cards remains costly, but it is a periodic process; inference, by contrast, runs continuously and carries its own substantial expense. ChatGPT now serves more than 100 million daily active users, including 25 million paid subscribers, and inference operations account for nearly half of OpenAI's estimated $40 billion annual compute budget. Google's TPUs, such as the v6e "Trillium," provide a more cost-effective solution for steady-state inference, as they are designed specifically for high throughput and low latency.

Beyond cost savings, this decision reflects OpenAI's desire to reduce reliance on any single vendor. Microsoft Azure has been its primary cloud provider since early investments and collaborations. However, GPU supply shortages and price fluctuations exposed a weakness in relying too heavily on a single source. By adding Google Cloud to its infrastructure mix, OpenAI gains greater flexibility, avoids vendor lock-in, and can scale more smoothly during usage peaks. For Google, winning OpenAI as a TPU customer offers strong validation for its in-house chip development. TPUs were once reserved almost exclusively for internal projects such as powering the Gemini model. Now, they are attracting leading organizations like Apple and Anthropic. Note that, beyond v6e inference, Google also designs TPUs for training (yet-to-be-announced v6p), which means companies can scale their entire training runs on Google's infrastructure on demand.

Vulkan API Unifies Image Layouts, Waving Goodbye to Sync Issues

In the Vulkan API, image layouts have always been granularly controlled by developers: whenever you use an image in a new way, such as targeting it for transfer, feeding it into a shader, or presenting it on screen, you must explicitly transition it between layouts. Even one barrier with the wrong layout, access mask, pipeline stage, or ownership transfer can cause race conditions, visual corruption, or GPU hangs. Managing numerous layouts and barriers adds boilerplate and increases the risk of subtle platform-specific bugs, making it harder to port games across platforms. Today, this is no longer the case: The Khronos Group, the consortium behind the Vulkan API, unveiled the VK_KHR_unified_image_layouts extension, which simplifies synchronization by promoting VK_IMAGE_LAYOUT_GENERAL as the default state for nearly all image operations.

When Vulkan 1.0 launched over a decade ago, its explicit synchronization model required engineers to specify distinct layouts. This included TRANSFER_DST_OPTIMAL, SHADER_READ_ONLY_OPTIMAL, and COLOR_ATTACHMENT_OPTIMAL for each access pattern. This granularity once maximized performance on early GPU hardware but introduced a steep learning curve and frequent bugs. With modern GPUs capable of handling many transitions internally, developers no longer need to juggle multiple layout states. Under the new model, only two scenarios still demand special treatment: initializing new images with VK_IMAGE_LAYOUT_UNDEFINED and sharing or presenting images to external queues or display systems. Now, the new extension collapses the remaining layout types into one versatile state, reduces boilerplate code, lowers the risk of synchronization errors, and minimizes unnecessary pipeline stalls, allowing the GPU to operate more efficiently. Driver support for VK_KHR_unified_image_layouts is already available in the latest GPU releases, with validation-layer integration set for the July 2025 Vulkan SDK.
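
For developers, the practical difference shows up in barrier code. Below is a minimal before/after sketch using the standard Vulkan C API; it assumes a valid command buffer and image, and omits the extension's enablement and feature-query plumbing.

```cpp
#include <vulkan/vulkan.h>

// Classic Vulkan: explicitly transition an image from "transfer destination"
// to "shader read" before sampling it. One wrong field here can corrupt output.
void transition_old_style(VkCommandBuffer cmd, VkImage image) {
    VkImageMemoryBarrier barrier{};
    barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT;
    barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;
    barrier.oldLayout           = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;
    barrier.newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image               = image;
    barrier.subresourceRange    = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                         0, 0, nullptr, 0, nullptr, 1, &barrier);
}

// With VK_KHR_unified_image_layouts enabled, the image can simply stay in
// VK_IMAGE_LAYOUT_GENERAL; only the execution/memory dependency remains.
void transition_unified(VkCommandBuffer cmd, VkImage image) {
    VkImageMemoryBarrier barrier{};
    barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT;
    barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;
    barrier.oldLayout           = VK_IMAGE_LAYOUT_GENERAL;  // no per-use layouts
    barrier.newLayout           = VK_IMAGE_LAYOUT_GENERAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image               = image;
    barrier.subresourceRange    = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};
    vkCmdPipelineBarrier(cmd,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                         0, 0, nullptr, 0, nullptr, 1, &barrier);
}
```

With images parked in VK_IMAGE_LAYOUT_GENERAL, the error-prone oldLayout/newLayout bookkeeping drops away, and only genuine execution and memory dependencies remain to be expressed.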

Dell Announces the 16 Premium and 14 Premium Laptop Ranges

Today, Dell unveils its new lineup of flagship laptops, now under the Dell Premium name. Powered by the latest Intel Core Ultra 200H series processors, these devices deliver meaningful performance advancements designed for students, creators and entrepreneurs who rely on their PCs to fuel their ambitions and keep pace with what's next.

Staying true to the XPS tradition, the new Dell Premium laptops uphold the signature craftsmanship and innovation that customers know and love - stunning displays, elevated and smooth finishes, monochromatic colors and cutting-edge technology. The new name signals a fresh chapter—one that makes it easier than ever to find the right PC while providing the same exceptional quality, design and performance.

Eurocom Releases User-Upgradeable 17.3" Nightsky RX517 Laptop Featuring NVIDIA RTX 5070

Eurocom introduces the Nightsky RX517, a customizable and user-upgradeable 17.3-inch laptop built for professionals, creators, and power users who need a balance of screen size, flexibility, and long-term scalability. The Nightsky RX517 comes with a large 17.3-inch internal display for those looking for a bigger viewing area. At its core is the Intel Core Ultra 9 275HX processor with 24 cores / 24 threads and 36 MB cache, paired with the NVIDIA GeForce RTX 5070 Blackwell GPU (4608 CUDA cores, 144 Tensor AI cores, and 8 GB GDDR7 memory). This configuration is ideal for productivity workloads, 2D/3D content handling, development, multi-display environments, and smooth day-to-day performance.

Designed to support up to four active displays with NVIDIA Surround View, the Nightsky RX517 lets power users expand their desktop environment for improved multitasking, creative workflows, and visual workspace management. The internal 17.3-inch FHD 144 Hz display delivers smooth visuals and a large canvas for creative and technical work. Following EUROCOM's user-friendly upgrade philosophy, the Nightsky RX517 offers user-accessible dual SODIMM slots supporting up to 128 GB of DDR5-5600 memory, and three physical M.2 NVMe slots enabling up to 24 TB of SSD storage in RAID 0/1/5 configurations. This allows users to upgrade storage and memory to meet changing computing requirements without replacing the laptop.

ASUS TUF GPU Mix-Up Creates Rare Radeon-GeForce Hybrid

GPU AIB partners sometimes design one cooler and adapt it to multiple models, typically within a single GPU family or at least from the same manufacturer, such as AMD or NVIDIA. Today, we are witnessing what might be the first crossover: an ASUS TUF Gaming Radeon RX 9070 XT OC Edition wearing a GeForce RTX badge. Reddit user Fantastic-Ad8410 encountered persistent display artifacts on his Radeon RX 9070 XT OC Edition and returned it to Micro Center for an exchange. He expected an identical replacement, but upon unboxing at home he discovered a surprising anomaly: the card's backplate clearly reads "AMD Radeon RX 9070 XT" while the fan shroud prominently displays "GeForce RTX 5070 Ti." Both the Radeon RX 9070 XT and the GeForce RTX 5070 Ti employ very similar thermal solutions, with respective TDPs of 304 W and 300 W, so ASUS uses the same dual-fan cooler design on both models.

This is not an isolated incident. Approximately one month earlier, another Redditor, Blood-Wolfe, reported that an ASUS TUF Gaming GeForce RTX 5070 Ti OC Edition arrived bearing Radeon branding on its top panel. Given the nearly identical mounting points and the proximity of cooler assembly stations, a single misplaced component can lead to these hybrid graphics cards. A momentary mix-up on the production line allowed parts intended for one GPU to be fitted to another. In practical terms, the mixed-brand card performs exactly like a standard Radeon RX 9070 XT. The PCB, GPU silicon, and memory modules are all genuine AMD components, so gaming benchmarks and performance metrics match official specifications. Yet this unexpected blend of AMD and NVIDIA branding has sparked lively debate online. Whether Fantastic-Ad8410 opts to keep this one-of-a-kind GPU or seek another replacement remains to be seen, but this GPU is now definitely a collectible for some enthusiasts.

ASUS BTF 2.5 Connector on GeForce RTX 5090 Stays Cool Even at 1,900 W Load

ASUS has confirmed that its GeForce RTX 5090 BTF Edition can draw nearly 1,900 W without any signs of overheating, thanks to a robust metal power connector that holds up under pressure. In a series of controlled stress tests led by Tony Yu, General Manager of ASUS China, the BTF v2.5 proprietary GC-HPWR connector remained well within safe temperature limits, outperforming the conventional plastic 16-pin 12VHPWR/12V-2x6 alternative. The latest Back-to-the-Future update introduces a detachable GC-HPWR adapter, giving users the flexibility to switch between ASUS's metal connector and the standard 16-pin plug. ASUS supplies a small extraction tool that makes it easy to swap connectors, overcoming the original design's limitation of a permanently attached GC-HPWR adapter and ensuring wider compatibility with existing motherboards, as well as non-BTF-supported boards.

In the first trial, the card drew approximately 670 W, roughly matching the RTX 5090's typical maximum, and the GC-HPWR connector stabilized between 30°C and 35°C over ten minutes. ASUS engineers rated the metal connector for up to 1,000 W continuous operation, and the results confirmed a comfortable safety margin for maximum load. Tony Yu even increased the draw to 1,300 W, yielding a peak connector temperature of 38°C. In a final extreme test, the load was set to 150 A, driving total consumption above 1,900 W. Even under these conditions, the metal connector held at about 41°C, while the power cables reached roughly 70°C, demonstrating the adapter's superior thermal performance. Yu's experiments also showed that the GC-HPWR and 16-pin connectors can share the power load. At a 200 A setting, each connector supplied around 1,200 W to 1,400 W from separate power lines. Although ASUS plans to ship retail RTX 5090 cards with a single connector, this test suggests a possible path for future ultra-high-power GPUs designed for extreme overclockers.
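
As a sanity check on those figures, DC power is simply rail voltage times current; the 12.8 V value in the sketch below is an assumed slightly-above-nominal 12 V rail, not a number from the ASUS test.

```cpp
#include <cstdio>

int main() {
    const double amps = 150.0;  // the final extreme-test load setting
    // P = V x I; the nominal 12 V figure undershoots the reported total,
    // while a slightly elevated rail voltage lands "above 1,900 W".
    std::printf("150 A at 12.0 V: %.0f W\n", amps * 12.0);  // 1800 W
    std::printf("150 A at 12.8 V: %.0f W\n", amps * 12.8);  // ~1920 W
}
```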

GPU IPC Showdown: NVIDIA Blackwell vs Ada Lovelace; AMD RDNA 4 vs RDNA 3

Instructions per clock (IPC) is a metric usually used to define and compare CPU architecture performance. However, our enthusiast colleagues at ComputerBase had the idea of testing IPC improvements in GPUs, comparing current and past generations. NVIDIA's Blackwell-based GeForce RTX 50 series faces off against the Ada Lovelace-based RTX 40 generation, while AMD's RDNA 4-powered Radeon RX 9000 lineup challenges the RDNA 3-based RX 7000 series. For NVIDIA, the test used the RTX 5070 Ti and RTX 4070 Ti SUPER, aligning ALU counts and clock speeds and treating memory bandwidth differences as negligible. For AMD, the test matched the RX 9060 XT against the RX 7600 XT, both featuring identical ALU counts and GDDR6 memory. By closely matching shader counts and normalizing for clock variations, ComputerBase isolates IPC improvements from other hardware enhancements. In rasterized rendering tests across 19 popular titles, NVIDIA's Blackwell architecture delivered an average IPC advantage of just 1% over the older Ada Lovelace.

This difference could easily be attributed to normal benchmark variance. Ray tracing and path tracing benchmarks showed no significant IPC uplift, leaving the latest generation essentially on par with its predecessor when normalized for clock and unit count. AMD's RDNA 4, by contrast, exhibited a substantial IPC leap. Rasterized performance improved by around 20% compared to RDNA 3, while ray-traced workloads enjoyed a roughly 31% gain. Path tracing results were even more extreme, with RDNA 4 delivering nearly twice the FPS, a 100% increase over its predecessor. These findings suggest that NVIDIA's performance improvements primarily stem from higher clock speeds, increased execution unit counts, and enhanced features. AMD's RDNA 4 represents a significant architectural advance, marking its most notable IPC gain since the original RDNA launch.
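
The normalization behind these numbers is straightforward: divide each card's frame rate by the product of its shader count and clock speed, so that only architectural efficiency remains. A minimal sketch of the idea follows, with placeholder numbers rather than ComputerBase's data:

```cpp
#include <cstdio>

// Illustration of the normalization idea: divide frame rate by
// (shader count x clock) so only architectural efficiency remains.
// All numbers are placeholders, not ComputerBase's measurements.
struct Card { const char* name; double fps, shaders, clock_ghz; };

double per_alu_per_clock(const Card& c) { return c.fps / (c.shaders * c.clock_ghz); }

int main() {
    Card prev{"previous-gen", 100.0, 8448, 2.60};  // placeholder values
    Card next{"new-gen",      102.0, 8960, 2.45};  // placeholder values
    double gain = per_alu_per_clock(next) / per_alu_per_clock(prev) - 1.0;
    std::printf("%s vs %s normalized IPC delta: %+.1f%%\n",
                next.name, prev.name, gain * 100.0);
}
```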

Researchers Unveil Real-Time GPU-Only Pipeline for Fully Procedural Trees

A research team from Coburg University of Applied Sciences and Arts in Germany, together with AMD Germany, has introduced a game-changing approach to procedural tree creation that runs entirely on the GPU, delivering speed and flexibility unlike anything we've seen before. Showcased at High-Performance Graphics 2025 in Copenhagen, the new pipeline utilizes DirectX 12 work graphs and mesh nodes to construct detailed tree models on the fly, without any CPU muscle. Artists and developers can tweak more than 150 parameters, everything from seasonal leaf color shifts and branch pruning styles to complex animations and automatic level-of-detail adjustments, all in real time. When tested on an AMD Radeon RX 7900 XTX, the system generated and pushed unique tree geometries into the geometry buffer in just over three milliseconds, and it automatically tunes detail levels to maintain a target frame rate, demonstrating stable 120 FPS under heavy workloads.

Wind effects and environmental interactions update seamlessly, and the CPU's only job is to fill a small set of constants (camera matrices, timestamps, and so on) before dispatching a single work graph. There's no need for continuous host-device chatter or asset streaming, which simplifies integration into existing engines. Perhaps the most eye-opening result is how little memory the transient data consumes. A traditional buffer-heavy approach might need tens of GB, but the researchers' demo holds onto just 51 KB of persistent state per frame, a mind-boggling 99.9999% reduction compared to conventional methods. A scratch buffer of up to 1.5 GB is allocated for work-graph execution, though actual usage varies by GPU driver and can be released or reused afterward. Static assets, such as meshes and textures, remain unaffected, leaving future opportunities for neural compression or procedural texturing to further enhance memory savings.

Unreal Engine 5.6 Delivers Up to 35% Performance Improvement Over v5.4

Thanks to a new comparison video from the YouTube channel MxBenchmarkPC, the Paris Tech Demo by Scans Factory is put through its paces on an RTX 5080, running side by side in Unreal Engine 5.6 and version 5.4 with hardware Lumen enabled. That way, we get to see what Epic Games has done with the hardware optimization in the latest release. In GPU‑limited scenarios, the upgrade is immediately clear, with frame rates jumping by as much as 25% thanks to better utilization of graphics resources, even if that means the card draws a bit more power to deliver the boost. When the CPU becomes the bottleneck, Unreal Engine 5.6 really pulls ahead, smoothing out frame-time spikes and delivering up to 35% higher throughput compared to the older build. Beyond the raw numbers, the new version also refines Lumen's visuals. Lighting feels more accurate, and reflections appear crisper while maintaining the same level of shadow and ambient occlusion detail that developers expect.

Unreal Engine 5.6 was officially launched earlier this month, just after Epic Games wrapped its Unreal Fest keynote, where it teased many of these improvements. Hardware-accelerated ray tracing enhancements now shift more of the Lumen global illumination workload onto modern GPUs, and a Fast Geometry Streaming plugin makes loading vast, static worlds feel seamless and stutter-free. Animators will appreciate the revamped motion trails interface, which speeds up keyframe adjustments, and new device profiles automatically tune settings to hit target frame rates on consoles and high-end PCs. To showcase what's possible, Epic teamed up with CD Projekt Red on a tech demo of The Witcher IV that runs at a steady 60 FPS with ray tracing fully enabled on the current-generation PlayStation 5 console. If you're curious to dive in, you can download the Unreal Engine 5.6 Paris - Fontaine Saint-Michel Tech Demo today and explore it for yourself on your PC.

AMD's Next-Gen UDNA-Based Radeon GPUs to Use 80 Gbit/s HDMI 2.2

Just as the RDNA 4 rollout is wrapping up, the rumor mill for the next generation of Radeon GPUs is intensifying. According to reliable leaker @Kepler_L2 on X, AMD is equipping its next-generation UDNA-based GPUs with the HDMI 2.2 display protocol at 64 and 80 Gbit/s link rates, meaning the highest-end 96 Gbit/s tier (Ultra96) is not expected to arrive until the generation after. The HDMI Forum, the consortium behind the HDMI standard, officially released the HDMI 2.2 specification in January during CES, and it foresees HDMI 2.2 cables hitting the shelves sometime in Q3 or Q4 of 2025, which means the rollout is near. Still, AMD's next-generation GPUs will not utilize the standard's entire feature spectrum in the coming GFX13 GPU architecture.

The HDMI 2.2 update introduces a future-proof Fixed Rate Link architecture that delivers up to 96 Gbps of bandwidth, enabling uncompressed 4K at 240 Hz or 8K at 60 Hz with full 4:4:4 chroma, and paving the way for even higher resolutions, such as 10K and 12K at 120 Hz. It accommodates both uncompressed and visually lossless compressed formats, as well as advanced chroma sampling options. This massive data throughput meets the growing demands of immersive audiovisual experiences from next-generation gaming and augmented and virtual reality to professional light-field displays, large-format digital signage, medical imaging, and machine-vision systems by doubling payload capacity every two to three years. The Ultra96 HDMI cable is designed to handle all HDMI 2.2 features. This cable is accompanied by the new Latency Indication Protocol, which ensures precise lip-sync and audio-video alignment across complex multi-hop setups involving receivers, soundbars, and other downstream devices.
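
The headline modes are easy to sanity-check: uncompressed bandwidth is roughly width x height x refresh rate x bits per pixel. The sketch below ignores blanking intervals and FRL encoding overhead, so real link-rate requirements sit noticeably higher than these raw figures:

```cpp
#include <cstdio>

// Raw pixel payload for a video mode: width x height x refresh x bits/pixel.
// Blanking and FRL encoding overhead are deliberately left out.
double raw_gbps(double w, double h, double hz, double bpp) {
    return w * h * hz * bpp / 1e9;
}

int main() {
    // 4:4:4 RGB at 10 bits per channel = 30 bits per pixel.
    std::printf("4K @ 240 Hz: %.1f Gbit/s raw\n", raw_gbps(3840, 2160, 240, 30));
    std::printf("8K @  60 Hz: %.1f Gbit/s raw\n", raw_gbps(7680, 4320, 60, 30));
}
```

Both modes land near 60 Gbit/s of raw pixel data, which explains why they outgrow today's links yet still fit inside the 96 Gbit/s Ultra96 budget once overhead is added.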

Alphacool Unveils New Eisblock Aurora Arc B580 Steel Legend/Challenger GPU Water Block

Alphacool International GmbH, based in Braunschweig, has been a pioneer in PC water cooling technology for over 20 years. With one of the most comprehensive product portfolios in the industry, Alphacool is now expanding its lineup with the new Eisblock Aurora Arc B580 Steel Legend + Challenger with Backplate.

The GPU water cooler is precisely tailored to fit the layout of ASRock's Intel Arc B580 Steel Legend and Challenger graphics cards. An updated fin structure, optimized water flow, and an improved jetplate ensure high cooling performance. The chrome-plated copper base offers excellent thermal conductivity and long-lasting durability. Subtle RGB lighting complements the elegant design and provides even illumination.

Steam Adds In‑Game Performance Monitor Overlay with Expanded Metrics

Valve has rolled out a significant upgrade to its in-game performance tools with the June 17 Beta client update. Instead of a simple FPS counter, Steam now offers a full Performance Monitor that tracks frame rate alongside CPU and GPU utilization, clock speeds, temperatures, and memory usage. Players can view real-time graphs for each metric or opt for a pared-down display showing only FPS. The overlay also flags when frame-generation features like DLSS or FSR are active, clearly separating true rendered frames from those created by upscaling technology. This clarity helps gamers understand whether a smooth experience results from extra generated frames or genuine improvements in rendering.

Competitive and detail-focused users will appreciate knowing both the true game-frame counts and upscaled FPS so they can fine-tune settings based on actual performance. If the monitor shows full GPU memory, reducing texture quality becomes an obvious fix, and if CPU usage is maxed out, dialing back physics or draw distance may be the answer. Currently, the Performance Monitor is only available to Steam Beta participants. Valve plans to roll out additional metrics over time and notes that not every feature will be compatible with every system from the start. Anyone curious to try the new tools should switch to the beta client and explore the updated overlay options. Once these features reach full release, millions of PC gamers will have powerful diagnostics at their fingertips, making it easier than ever to balance visual quality with smooth performance.

ASUS RTX 5090 Dhahab Edition Raises $24,200 in Charity Auction

ASUS recently held a charity auction for a special edition ROG Astral GeForce RTX 5090 Dhahab Edition graphics card signed by NVIDIA's CEO, Jensen Huang. The company raised $24,200, which will go to the TZU CHI USA Foundation, a volunteer-based non-governmental organization providing charitable assistance, medical aid, disaster relief, educational services, and environmental protection programs to those in need.

The auction ran from June 4 to June 13, and, showing how tight the race was, the three highest bidders finished just $10 apart. As the ROG Astral GeForce RTX 5090 Dhahab OC Edition is quite a rare graphics card, sold exclusively in the Middle East, the one signed by Jensen Huang is quite a catch. Hopefully, it won't end up on eBay.

ASUS Republic of Gamers Unveils ROG Astral GeForce RTX 5080 Dhahab CORE OC Edition

ASUS Republic of Gamers (ROG) today announced the ROG Astral GeForce RTX 5080 Dhahab CORE OC Edition graphics card, built to take style and performance to new frontiers. With the latest NVIDIA GPU architecture, cutting-edge thermal design and a premium aesthetic, the ROG Astral GeForce RTX 5080 Dhahab CORE OC is built for gamers who want a PC that plays well and looks incredible doing it.

The gold standard of GeForce RTX 5080 performance
The ROG Astral GeForce RTX 5080 Dhahab CORE OC Edition graphics card stands ready to let users reap the benefits of the new Blackwell architecture at the heart of the NVIDIA GeForce RTX 50 Series. This delivers fourth-generation ray tracing cores for incredible performance. Users also get NVIDIA DLSS 4 Super Resolution, Multi-Frame Generation and Ray Reconstruction, which help games run smoothly with graphics cranked up.

Lossless Scaling 3.1 Update Brings 2x Performance Increase and More Visual Fidelity

The developer of the Lossless Scaling utility has released version 3.1, introducing both visual enhancements and significant performance improvements, up to 2x in "Performance Mode" depending on hardware and settings. Although there may be a slight dip in image fidelity, users with older graphics cards or integrated solutions will appreciate the ability to boost frame rates without lowering resolution. In addition to this new mode, the v3.1 update improves timestamp filtering for sharper details during rapid motion and refines border handling to eliminate jagged edges. Ghosting and object flicker have been reduced, and smarter interface detection keeps menus and heads‑up displays stable under scaling. Frame generation is powered by LSFG, while scaling options include LS1, AMD FidelityFX Super Resolution, NVIDIA Image Scaling, Integer Scaling, Nearest Neighbor, xBR, Anime4K, Sharp Bilinear and Bicubic CAS.

Version 3.1 also expands localization support to include Finnish, Georgian, Greek, Norwegian, Slovak, and even the constructed language Toki Pona, reflecting the tool's growing international audience. Developed and maintained by a single creator, Lossless Scaling competes directly with proprietary upscalers, such as NVIDIA DLSS and AMD FSR, without requiring dedicated machine-learning hardware, and is offered at a one-time price of $6.99. Its built-in algorithms are tailored to different content types: LS1 or AMD FSR is recommended for modern games, Integer Scaling or xBR works best for pixel-art titles, and Anime4K excels with cartoons or anime. For optimal results, users should cap their game at a stable frame rate, giving the utility sufficient resources to operate, and run their titles in windowed mode on Windows 10 version 1903 or later.
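
Of those algorithms, integer scaling is the easiest to picture: each source pixel is simply replicated into an s x s block, which keeps pixel art razor sharp instead of smearing it with bilinear filtering. A minimal sketch:

```cpp
#include <cstdio>
#include <vector>

// Integer scaling: replicate every source pixel into an s x s block,
// upscaling pixel art without the blur of interpolating filters.
std::vector<int> integer_scale(const std::vector<int>& src, int w, int h, int s) {
    std::vector<int> dst(w * s * h * s);
    for (int y = 0; y < h * s; ++y)
        for (int x = 0; x < w * s; ++x)
            dst[y * w * s + x] = src[(y / s) * w + (x / s)];
    return dst;
}

int main() {
    std::vector<int> src = {1, 2, 3, 4};     // 2x2 "image"
    auto dst = integer_scale(src, 2, 2, 2);  // scaled to 4x4
    for (int y = 0; y < 4; ++y, std::puts(""))
        for (int x = 0; x < 4; ++x) std::printf("%d ", dst[y * 4 + x]);
}
```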

Intel Arc "Battlemage" B770 GPU Support Lands in Popular AIDA64 Tool

More confirmations regarding the final release of the Intel Arc "Battlemage" B770 GPU have landed, this time with an update to the popular AIDA64 tool. Just days after support for BMG-G31 GPUs, supposedly the SKUs behind the higher-end B770 and B750 models, landed in the open-source Mesa driver, diagnostic tools are next. In the latest AIDA64 beta, version 7.99.7817, FinalWire has added an interesting "GPU information for Intel Battlemage (BMG-G31)" section as a feature update. This means the tool can now officially recognize Intel's upcoming GPUs and allow users to perform diagnostics. The same beta also adds support for the now-finalized PCI Express 7.0 controllers and devices, as PCI-SIG has ratified the final specification of the PCIe Gen 7 standard.

With this confirmation, the higher-end Intel Arc B770 and B750 GPUs gain more credibility for an actual release, and we expect to hear more in the coming weeks as the rumored Q4 launch nears. Earlier rumors suggest that Intel will pair 32 Xe2 cores with 16 GB of GDDR6 memory on a 256-bit bus for the B770 model. Whether Intel will satisfy the needs of Arc gamers who have been waiting for a higher-end card remains to be seen. Drop your expectations in the comments, and let us know.

NVIDIA RTX 50 Series GPUs at MSRP in the Most Unexpected Place: US Navy

The US Navy Exchange (NEX) store has become a surprising platform for acquiring NVIDIA's RTX 50-series graphics cards at their manufacturer's suggested retail prices. A Reddit user, known as Accomplished-Feed123, shared that by combining various store promotions and credit card rewards, they managed to purchase a GeForce RTX 5090 Founders Edition for just $1,900, which is significantly below the typical retail price. Savvy shoppers have long discovered open‑box electronics and gaming hardware bargains there. On this occasion, the Reddit user noticed several "largeish brown boxes" hidden behind a locked display that usually houses Apple products.

Those boxes contained multiple RTX 5070 and RTX 5080 cards, along with a single RTX 5090, all priced at their suggested MSRPs of $550, $999, and $1,999, respectively. A quick online search of the part numbers confirmed that the top‑end card was indeed the Founders Edition model. After applying applicable discounts and card rewards, Accomplished‑Feed123 walked away, paying only $1,900 out of pocket. Access to NEX is restricted to active and retired military members and their families, operating under the motto "You Serve, You Save." However, many consumers may know someone eligible for these benefits. Other branches of the US armed forces maintain similar exchange stores, though GPU availability and pricing may differ by location.

Intel Arc "Battlemage" BMG-G31 B770 GPU Support Lands in Mesa Driver

Intel has quietly submitted patches for BMG-G31 GPU SKUs to the Mesa open-source graphics driver library. With device IDs e220, e221, e222, and e223, Intel is gearing up for the launch of its higher-end "Battlemage" B770. In the weeks leading up to Computex 2025, Intel dropped hints and unofficial leaks about new Arc Xe2 "Battlemage" desktop graphics cards, including rumors of a high-end B770 model and placeholder mentions of a B750 on Intel Japan's website. Fans were excited, but at the Taipei Nangang show, neither card appeared. Then Tweakers.net reported, based on unnamed insiders, that the Battlemage-based Arc B770 is real and expected to launch in November 2025, though plans could still change.

For the B770, Intel plans to pair 32 Xe2 cores with 16 GB of GDDR6 memory on a 256-bit bus. Interestingly, the card will use a PCIe 5.0 ×16 host bus, whereas the lower-end Arc B580 and Arc B570 use a PCIe 4.0 ×8 host bus. The faster, wider host interface makes sense, as the higher-end Arc B770 delivers significantly more compute and therefore demands more host bandwidth, so we will have to wait and see what Intel has prepared. If the rumored Q4 launch materializes, it will give gamers an interesting choice right around the holidays.