News Posts matching #NVIDIA


New Monster Hunter Wilds Patch Lands To Address PC Performance Issues

Monster Hunter Wilds players on PC have had a rough time with performance lately, with the game's recent Steam reviews seeing a number of players complaining about random stutters, FPS drops, and a general lack of optimization. Even gamers playing on relatively high-end recent GPUs, like the AMD Radeon RX 7900 XT, report that their experience has been tainted by poor performance, even if the actual game content is good. With the latest Monster Hunter Wilds patch, though, Capcom is attempting to address those performance issues—and the resulting slew of negative reviews—as announced today in a post on X.

While there are new monsters, weapons, cosmetics, equipment, and other in-game content, the majority of the focus of the 1.020.00.00 update (full notes here) is on those performance updates. Capcom has changed the way shader compilation works, so the CPU-intensive task now takes place the first time you run the game after an update, and has rolled out a slew of upscaler and frame generation changes, primarily adding DLSS 4 support for GeForce RTX 20-series and newer GPUs and FSR 4 support for Radeon RX 9000-series GPUs. The update also allows players to mix upscaling and frame generation methods, which should let them better tune the game's visuals and performance. Additional fixes include reduced VRAM usage from texture streaming and a more accurate calculation of estimated VRAM consumption. Steam users also now get a notification upon launching Monster Hunter Wilds if they are running an unsupported operating system or out-of-date GPU drivers, or if they are running the game in compatibility mode.
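Capcom hasn't detailed the mechanism, but pre-compiling shaders on the first launch after an update generally comes down to keying the shader cache on everything that can invalidate it. Below is a minimal Python sketch of that idea; the file names and version strings are hypothetical, not Capcom's.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical cache location and manifest name, for illustration only.
CACHE_DIR = Path("shader_cache")
MANIFEST = CACHE_DIR / "manifest.json"

def cache_key(game_version: str, driver_version: str, gpu_name: str) -> str:
    """Anything that can invalidate compiled shaders goes into the key."""
    blob = f"{game_version}|{driver_version}|{gpu_name}".encode()
    return hashlib.sha256(blob).hexdigest()

def needs_recompile(game_version: str, driver_version: str, gpu_name: str) -> bool:
    """True on the first launch after a game update or driver change."""
    if not MANIFEST.exists():
        return True
    manifest = json.loads(MANIFEST.read_text())
    return manifest.get("key") != cache_key(game_version, driver_version, gpu_name)

def mark_compiled(game_version: str, driver_version: str, gpu_name: str) -> None:
    """Record the current configuration once compilation finishes."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = cache_key(game_version, driver_version, gpu_name)
    MANIFEST.write_text(json.dumps({"key": key}))
```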

NVIDIA's Dominance Challenged as Largest AI Lab Adopts Google TPUs

NVIDIA's AI hardware dominance is being challenged as the world's leading AI lab, OpenAI, taps into Google TPU hardware, in a significant effort to move away from single-vendor solutions. In June 2025, OpenAI began leasing Google Cloud's Tensor Processing Units to handle ChatGPT's increasing inference workload. This is the first time OpenAI has relied on non-NVIDIA chips in large-scale production. Until recently, NVIDIA GPUs powered both model training and inference for OpenAI's products. Training large language models on those cards is costly, but it is a periodic process; inference, by contrast, runs continuously and carries its own substantial expense. ChatGPT now serves more than 100 million daily active users, including 25 million paid subscribers, and inference operations account for nearly half of OpenAI's estimated $40 billion annual compute budget. Google's TPUs, like the v6e "Trillium", provide a more cost-effective solution for steady-state inference, as they are designed specifically for high throughput and low latency.

Beyond cost savings, this decision reflects OpenAI's desire to reduce reliance on any single vendor. Microsoft Azure has been its primary cloud provider since Microsoft's early investments and the companies' close collaboration began. However, GPU supply shortages and price fluctuations exposed the weakness of relying too heavily on a single source. By adding Google Cloud to its infrastructure mix, OpenAI gains greater flexibility, avoids vendor lock-in, and can scale more smoothly during usage peaks. For Google, winning OpenAI as a TPU customer offers strong validation of its in-house chip development. TPUs were once reserved almost exclusively for internal projects, such as powering the Gemini models. Now they are attracting leading organizations like Apple and Anthropic. Note that, beyond the inference-oriented v6e, Google also designs TPUs for training (the yet-to-be-announced v6p), which means companies can scale their entire training runs on Google's infrastructure on demand.

NVIDIA GeForce RTX 5080 SUPER Could Feature 24 GB Memory, Increased Power Limits

Hot on the heels of the rumored specs of the GeForce RTX 5070 SUPER and RTX 5070 Ti SUPER, specs of the RTX 5080 SUPER have emerged on VideoCardz. Apparently, NVIDIA will not tap into the larger "GB202" silicon to build the RTX 5080 SUPER despite having maxed out the "GB203" silicon for the current RTX 5080. It will take a slightly different approach, giving the card additional memory and a higher power limit. 24 Gbit GDDR7 memory chips are a defining feature of the RTX 50-series SUPER graphics cards, and much like the RTX 5070 Ti SUPER, the RTX 5080 SUPER will feature 24 GB of GDDR7 memory across a 256-bit wide memory bus.

NVIDIA could use 30 Gbps memory speeds for this SKU to end up with at least the same bandwidth as the regular RTX 5080, which is 960 GB/s. The other interesting aspect of the RTX 5080 SUPER is expected to be its increased TGP (total graphics power) of 415 W, a 15% increase over the 360 W TGP of the RTX 5080. This increase in TGP will not just support the higher-density memory chips, but also allow NVIDIA to increase GPU clock speeds. That will likely be necessary, given that NVIDIA has no headroom to increase shader counts on "GB203" and will need something to lift performance in games without heavy memory demands.
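For reference, the bandwidth figure follows directly from the per-pin data rate and the bus width; here is a quick sketch of the arithmetic in Python.

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth: per-pin data rate times bus width, converted from bits to bytes."""
    return data_rate_gbps * bus_width_bits / 8

# 30 Gbps GDDR7 chips on a 256-bit bus match the regular RTX 5080's 960 GB/s.
print(memory_bandwidth_gb_s(30, 256))  # 960.0
```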

NVIDIA GeForce RTX 5070 Ti SUPER Planned with 24 GB of GDDR7 Memory

We have already covered that NVIDIA is planning a SUPER refresh of its GeForce RTX 50-series GPUs, with the most recent leak revealing the GeForce RTX 5070 SUPER, which is expected to feature a slight increase in CUDA core count and 18 GB of memory. However, NVIDIA is also giving the RTX 5070 Ti the SUPER treatment, with the RTX 5070 Ti SUPER set to feature as much as 24 GB of GDDR7 memory with no change in CUDA core count. According to kopite7kimi, the RTX 5070 Ti SUPER will feature the same 8,960 CUDA core configuration as the GB203-350-A1 SKU, based on the PG147-SKU55 PCB. The only difference is an additional 8 GB of GDDR7 for a total of 24 GB, compared to the 16 GB on the regular RTX 5070 Ti.

This also changes the TBP of the card: the RTX 5070 Ti SUPER is now rumored to feature a 350 W TBP, up from the non-SUPER RTX 5070 Ti's 300 W power rating. Perhaps the extra power budget covers not only the additional VRAM (itself a power consumer) but higher clock speeds, too. It wouldn't be sufficient to call it a SUPER variant without some raw compute increase, so NVIDIA might be selecting better die bins for the SUPER SKU to enable higher default clock speeds within only a 50 W TBP increase. Nonetheless, we expect to hear more details in the coming weeks, especially since the wave of rumors has started to flood social media and leaks are becoming more frequent.

NVIDIA GeForce RTX 5070 SUPER Possible Specs Emerge: 24 Gbit Memory Takes Center Stage

NVIDIA will tap into 24 Gbit (3 GB) GDDR7 memory chips and increase memory sizes across the board for its RTX 50-series SUPER line of GPUs slated for later this year, if leaked specs of the upcoming GeForce RTX 5070 SUPER are anything to go by. Kopite7kimi, a reliable source with NVIDIA leaks, says that the RTX 5070 SUPER will feature 18 GB of memory—that's six 24 Gbit memory chips across a 192-bit wide GDDR7 memory bus. The card has a SKU designation of "PG147-SKU65" and ASIC code of "GB205-400-A1."

Besides 18 GB of memory, the RTX 5070 SUPER reportedly maxes out the "GB205" silicon it's based on, enabling all 50 SM present. The current RTX 5070 has 48 of 50 SM enabled, for 6,144 CUDA cores, whereas the RTX 5070 SUPER, with its 50 SM, should have 6,400 CUDA cores. The marginal increase in shader count will be bolstered by the 50% increase in memory size and possible increases in GPU boost clocks. The memory speed, however, remains unchanged: it ticks at the same 28 Gbps as on the RTX 5070, which yields 672 GB/s of bandwidth. The TGP increases to 275 W, from 250 W.
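Those shader counts follow from the 128 FP32 CUDA cores each Blackwell SM carries; a quick check of the figures above:

```python
CUDA_CORES_PER_SM = 128  # Blackwell, like Ada Lovelace, packs 128 FP32 cores per SM

def cuda_cores(sm_count: int) -> int:
    """CUDA core count implied by the number of enabled SMs."""
    return sm_count * CUDA_CORES_PER_SM

print(cuda_cores(48))  # 6144 -> current RTX 5070 (48 of 50 SM enabled)
print(cuda_cores(50))  # 6400 -> RTX 5070 SUPER with "GB205" fully enabled
```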

Dell Announces the 16 Premium and 14 Premium Laptop Ranges

Today, Dell unveils its new lineup of flagship laptops, now under the Dell Premium name. Powered by the latest Intel Core Ultra 200H series processors, these devices deliver meaningful performance advancements designed for students, creators and entrepreneurs who rely on their PCs to fuel their ambitions and keep pace with what's next.

Staying true to the XPS tradition, the new Dell Premium laptops uphold the signature craftsmanship and innovation that customers know and love - stunning displays, elevated and smooth finishes, monochromatic colors and cutting-edge technology. The new name signals a fresh chapter—one that makes it easier than ever to find the right PC while providing the same exceptional quality, design and performance.

NVIDIA GeForce NOW Gets Five New Titles, Including System Shock 2: 25th Anniversary Remaster

This GFN Thursday rolls out a new reward and new games for GeForce NOW members. Whether hunting for hot new releases or rediscovering timeless classics, members can always find more ways to play, games to stream and perks to enjoy. Gamers can score major discounts on the titles they've been eyeing - perfect for streaming in the cloud - during the Steam Summer Sale, running until Thursday, July 10, at 10 a.m. PT.

This week also brings unforgettable adventures to the cloud: We Happy Few and Broken Age are part of the five additions to the GeForce NOW library this week. The fun doesn't stop there. A new in-game reward for The Elder Scrolls Online is now available for members to claim. And SteelSeries has launched a new mobile controller that transforms phones into cloud gaming devices with GeForce NOW. Add it to the roster of on-the-go gaming devices - including the recently launched GeForce NOW app on Steam Deck for seamless 4K streaming.

Eurocom Releases User-Upgradeable 17.3" Nightsky RX517 Laptop Featuring NVIDIA RTX 5070

Eurocom introduces the Nightsky RX517, a customizable and user-upgradeable 17.3-inch laptop built for professionals, creators, and power users who need a balance of screen size, flexibility, and long-term scalability. The Nightsky RX517 comes with a large 17.3" internal display for those looking for a bigger viewing area. At the core of the Nightsky RX517 is the Intel Core Ultra 9 275HX processor with 24 cores / 24 threads and 36 MB of cache, paired with the NVIDIA GeForce RTX 5070 "Blackwell" GPU (4,608 CUDA cores, 144 Tensor cores, and 8 GB of GDDR7 memory). This configuration is ideal for productivity workloads, 2D/3D content creation, development, multi-display environments, and smooth day-to-day performance.

Designed to support up to four active displays with NVIDIA Surround View, the Nightsky RX517 lets power users expand their desktop environment for improved multitasking, creative workflows, and visual workspace management. The internal 17.3-inch FHD 144 Hz display delivers smooth visuals and a large canvas for creative and technical work. Following Eurocom's user-friendly upgrade philosophy, the Nightsky RX517 offers user-accessible dual SODIMM slots supporting up to 128 GB of DDR5-5600 memory, and three physical M.2 NVMe slots enabling up to 24 TB of SSD storage in RAID 0/1/5 configurations. This allows users to upgrade storage and memory to meet changing computing requirements without replacing the laptop.

800 RTX Games and Apps Now Available, DLSS 4 Coming To Diablo IV, Eternal Strands, and More

More than 800 games and applications feature RTX technologies, and each week new games integrating NVIDIA DLSS, NVIDIA Reflex, and advanced ray-traced effects are released or announced, delivering the definitive PC experience for GeForce RTX players. This week, Eternal Strands and Strinova are adding native support for DLSS 4 with Multi Frame Generation, further accelerating performance for GeForce RTX 50 Series gamers. Next week, Diablo IV adds DLSS 4 with Multi Frame Generation. And Monster Energy Supercross 25 - The Official Video Game is introducing support for DLSS Super Resolution.

Eternal Strands Adds DLSS 4 With Multi Frame Generation Today
In Eternal Strands, the debut fantasy action-adventure title from Yellow Brick Games, a new independent studio founded by industry veterans, you'll play as Brynn while you take down giant, climbable creatures. Armed with powerful abilities and an arsenal of weapons, cast and combine your magic to face enemies that range from humanoid constructs to towering beasts. Use the environment and temperature to your advantage in battles against a diverse roster of fantastical creatures, like turning a dragon's fiery breath against ice-covered minions. Climb every surface and use arcane skills to create new paths. Explore the world in pursuit of the Enclave's lost mysteries and challenge giant titans on your journey.

Samsung Releases Smart Monitor M9 With AI-Powered QD-OLED Display

Samsung Electronics today announced its latest Smart Monitor lineup, featuring the flagship Smart Monitor M9 (M90SF model) alongside the updated Smart Monitor M8 (M80F model) and M7 (M70F model). With the introduction of QD-OLED technology to the M9 and advanced AI features across the lineup, the new offerings provide a more personalized and connected screen for work and entertainment.

"The Smart Monitor series continues to evolve based on how people work, watch and play," said Hoon Chung, Executive Vice President of the Visual Display (VD) Business at Samsung Electronics. "With the introduction of QD-OLED and AI-powered enhancements, the M9 delivers a more responsive and refined screen experience - all within a single, versatile display."

ASUS Announces its GeForce RTX 5050 Graphics Card Series

ASUS today launched its NVIDIA GeForce RTX 5050 graphics card series. The series consists of four models: the RTX 5050 Prime, RTX 5050 Prime OC Edition, RTX 5050 DUAL, and RTX 5050 DUAL OC Edition. The RTX 5050 Prime/OC is the more premium design of the lot, with a large size of 26.8 cm length, 12 cm height, and 2.5-slot (50 mm) thickness. It uses an aluminium fin-stack heatsink that's ventilated by a trio of 70 mm dual ball-bearing Axial-Tech fans, and offers dual BIOS. The base Prime's default BIOS runs the card at 2572 MHz boost, while its OC BIOS runs it at 2602 MHz. The RTX 5050 Prime OC Edition is a slight step up, with its default BIOS running the card at 2677 MHz boost, and its OC BIOS at 2707 MHz—probably the highest we've seen for an RTX 5050 out of the box.

The ASUS RTX 5050 DUAL/OC series features a simpler, more compact board design measuring 20.3 cm in length, 12 cm in height, and 2 slots (40 mm) in thickness. The RTX 5050 DUAL could be priced close to, if not at, the NVIDIA MSRP. The card uses the same PCB as the Prime series, and offers dual BIOS. The base RTX 5050 DUAL offers 2572 MHz boost with its default BIOS, and 2602 MHz with its OC BIOS; the RTX 5050 DUAL OC Edition offers 2647 MHz with its default BIOS, and 2677 MHz with its OC BIOS. Although not announced, we expect ASUS to come up with an RTX 5050-based low-profile (half-height) graphics card, based on a similar board design to its RTX 5060 LP.

NVIDIA's DLSS Transformer Exits Beta, Ready for Deployment

NVIDIA's DLSS Transformer officially graduates from beta today and is rolling out in its full, stable form across supported games. Following its debut as part of the DLSS 4 update, which brought multi frame generation and significant improvements in image quality, the technology has proven its worth. By replacing traditional convolutional neural networks with Transformer models, NVIDIA has doubled the model's parameter count, boosted compute throughput fourfold, and delivered a 30-50% uplift in ray-traced effects quality, according to internal benchmarks. Even more impressive, each AI-enhanced frame now processes in just 1 ms on an RTX 5090, compared with the 3.25 ms required by DLSS 3.
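To put those frame times in perspective, here is a back-of-the-envelope look at how much of a frame budget each pass consumes; the 240 FPS target is our own assumption for illustration, not an NVIDIA figure.

```python
def overhead_share(pass_ms: float, target_fps: float) -> float:
    """Fraction of the per-frame time budget consumed by the upscaling pass."""
    frame_budget_ms = 1000.0 / target_fps
    return pass_ms / frame_budget_ms

# At an assumed 240 FPS target, the frame budget is ~4.17 ms:
print(f"DLSS 3 CNN model:   {overhead_share(3.25, 240):.0%}")  # ~78% of the budget
print(f"DLSS 4 Transformer: {overhead_share(1.00, 240):.0%}")  # ~24% of the budget
```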

Under the hood, DLSS 4 can tap into Blackwell-exclusive hardware, from FP8 tensor cores to fused CUDA kernels, and leans on vertical layer fusion and memory optimizations to keep overhead in check even with models twice the size of their CNN predecessors. To fine-tune performance and eliminate glitches such as ghosting, flicker, or blurring, NVIDIA has quietly run this network through a dedicated supercomputer for the past six years, continuously iterating on its findings. The result is a real-time AI upscaling solution that pairs a higher-performance AI architecture with rigorously validated quality.

ASUS TUF GPU Mix-Up Creates Rare Radeon-GeForce Hybrid

GPU AIB partners sometimes design a cooler and adapt it to more models, typically within a single GPU family or from the same manufacturer, such as AMD or NVIDIA. Today, we are witnessing what appears to be the first crossover, with an ASUS TUF Gaming Radeon RX 9070 XT OC Edition featuring a GeForce RTX badge. Reddit user Fantastic-Ad8410 encountered persistent display artifacts on his Radeon RX 9070 XT OC Edition and returned it to Micro Center for an exchange. He expected an identical replacement, but upon unboxing at home he discovered a surprising anomaly: the card's backplate clearly reads "AMD Radeon RX 9070 XT," while the fan shroud prominently displays "GeForce RTX 5070 Ti." Both the Radeon RX 9070 XT and the GeForce RTX 5070 Ti employ very similar thermal solutions, with respective TDPs of 304 W and 300 W, so ASUS uses the same cooler design on both models.

This is not an isolated incident. Approximately one month earlier, another Redditor, Blood-Wolfe, reported that an ASUS TUF Gaming GeForce RTX 5070 Ti OC Edition arrived bearing Radeon branding on its top panel. Given the nearly identical mounting points and the proximity of cooler assembly stations, a single misplaced component can lead to these hybrid graphics cards. A momentary mix-up on the production line allowed parts intended for one GPU to be fitted to another. In practical terms, the mixed-brand card performs exactly like a standard Radeon RX 9070 XT. The PCB, GPU silicon, and memory modules are all genuine AMD components, so gaming benchmarks and performance metrics match official specifications. Yet this unexpected blend of AMD and NVIDIA branding has sparked lively debate online. Whether Fantastic-Ad8410 opts to keep this one-of-a-kind GPU or seek another replacement remains to be seen, but this GPU is now definitely a collectible for some enthusiasts.

MSI Intros GeForce RTX 5050 Shadow 2X OC Graphics Card

MSI introduced the GeForce RTX 5050 Shadow 2X OC graphics card. At the time of this writing, this is MSI's only custom RTX 5050 model. The card is a premium factory-overclocked product, and is likely to be priced above the $249 NVIDIA MSRP for the RTX 5050. It is 19.6 cm long and 12 cm tall, while being strictly 2 slots thick. The card offers a factory-overclocked boost of 2602 MHz, compared to the 2570 MHz reference. The RTX 5050 is cooled by an aluminium fin-stack heatsink that appears to use a single nickel-plated copper heat pipe bent in an S-shape.

The heatsink is ventilated by a pair of MSI TorX 5.0 fans. This fan is used in many of MSI's premium custom-design cards, and features a partially webbed impeller designed to maximize axial airflow. The card draws power from a single 8-pin PCIe power connector. Display outputs include three DisplayPort 2.1b and one HDMI 2.1b. Based on the 5 nm "GB207" silicon, the RTX 5050 features 2,560 CUDA cores across 20 SM, 80 Tensor cores, 20 RT cores, and 8 GB of 20 Gbps GDDR6 memory across a 128-bit wide memory interface (320 GB/s memory bandwidth). The company didn't reveal pricing.

Gigabyte Launches Trio of GeForce RTX 5050 Graphics Cards, Including Low-Profile Model

Alongside everyone else, Gigabyte launched three graphics cards based on the GeForce RTX 5050 GPU today, with the GeForce RTX 5050 GAMING OC 8G topping the line-up, followed by the GeForce RTX 5050 WINDFORCE OC 8G and the GeForce RTX 5050 OC Low Profile 8G. The Gaming OC comes with a larger heatsink and three fans, as well as a gimmicky sliding side plate that reveals the "game on" slogan on the card. The Windforce OC makes do with a smaller heatsink and two fans, although both cards are equipped with Gigabyte's Hawk fans. Both cards also sport 8 GB of 20 Gbps GDDR6 memory, a pair of DP 2.1b and HDMI 2.1b outputs, and a single 8-pin power connector.

The OC Low Profile, on the other hand, is quite a different beast, largely due to its low-profile design. Although it sports the same memory as the larger cards, it only gets one DP 2.1b output; the second DP output is of the older 1.4b flavour, most likely because it is not mounted straight to the PCB. There is also a pair of HDMI 2.1b ports on this card. The card obviously has a smaller heatsink, but Gigabyte has still kitted it out with a copper contact plate for the GPU and a copper heat pipe. The smaller fans are said to be lubed with a graphene nano lubricant that is meant to extend fan life by 2.1 times. The card also sports an 8-pin power connector, and Gigabyte supplies a low-profile bracket.

Inno3D Launches the GeForce RTX 5050 Series

INNO3D, a leading manufacturer of high-end multimedia components and innovations, is proud to unveil its latest additions to the GeForce RTX 50 Series lineup: the INNO3D GeForce RTX 5050 GPUs. Powered by NVIDIA's highly efficient Blackwell architecture, these state-of-the-art graphics cards deliver exceptional performance and a suite of industry-leading features.

INNO3D has enhanced both the design and performance of its standout cooler series, featuring the sleek TWIN X2 and TWIN X2 OC models. Meanwhile, the RTX 5050 receives a small form factor upgrade with the single-fan COMPACT edition. These strategic enhancements underscore INNO3D's dedication to delivering extraordinary visual experiences, continuing to push the limits of graphics innovation for gamers and creators alike.

NVIDIA Launches GeForce RTX 5050 for Desktops and Laptops, Starts at $249

NVIDIA today formally launched the GeForce RTX 5050 mid-range gaming GPU for desktops and laptops. The desktop GeForce RTX 5050 graphics card starts at $249, while notebooks with RTX 5050 discrete GPUs should start at around $999. Availability of both is slated for "the second half of July 2025", although NVIDIA did not specify a date. The RTX 5050 is designed for 1080p AAA gaming at medium-to-high settings. The card offers DLSS 4 with Multi Frame Generation, which should unlock higher performance, letting you dial up the eye candy.

The desktop GeForce RTX 5050 debuts the "GB207" silicon, which it maxes out, enabling all 20 SM present on the chip, for 2,560 CUDA cores, 80 Tensor cores, 20 RT cores, 80 TMUs, and an unspecified ROP count. The GPU is clocked at 2.31 GHz base with 2.57 GHz boost. The memory, on the other hand, is 8 GB of older-generation GDDR6 across a 128-bit wide memory bus; the company didn't specify memory speed. Other features include a 130 W TGP, which makes it possible for AIC partners to build cards with 6-pin PCIe power connectors, although we expect most cards to feature 8-pin PCIe. The card comes with the latest set of NVENC and NVDEC video accelerators, and the latest display engine.

NVIDIA GeForce RTX 5050 to Launch on July 1?

NVIDIA is rumored to have advanced the launch date of its GeForce RTX 5050 mid-range GPU to July 1, 2025. It was earlier reported that the RTX 5050 would launch by the end of July. MEGAsizeGPU, a reliable source of NVIDIA leaks, says that NVIDIA informed AICs of the change, and that no cards will be ready to ship on that date. We are not quite sure what NVIDIA's strategy with the RTX 5050 is. The RTX 5060 launch was eclipsed by Computex back in May, and there are no major press events planned in July. One idea would be to announce the card and have AICs make it available to purchase as soon as they can, so the company could cover some ground over the mid-summer.

The GeForce RTX 5050 is rumored to be based on the "GB207" silicon, the company's smallest chip implementing the "Blackwell" graphics architecture. The chip is rumored to physically have 20 SM, all of which the RTX 5050 enables, for 2,560 CUDA cores, 80 Tensor cores, 20 RT cores, 80 TMUs, and an unknown number of ROPs. It's expected to offer 8 GB of GDDR6 memory (either 18 Gbps or 20 Gbps) across a 128-bit wide memory bus. NVIDIA is looking to target a sub-$250 price point with the RTX 5050, to compete with the Intel Arc B580.

Update 14:44 UTC: Turns out the RTX 5050 was announced today, with availability "in the second half of July".

NVIDIA to Launch China-specific GeForce RTX 5090DD in August

NVIDIA is preparing to launch the China-specific GeForce RTX 5090DD graphics card in August 2025, Benchlife reports. The SKU came to light last week bearing the internal codename "PG145-SKU40" and based on the "GB202-240-K#-A1" ASIC. It has the same shader count as the regular RTX 5090, with 21,760 CUDA cores across 170 SM, but a truncated memory sub-system: 24 GB of 28 Gbps GDDR7 across a 384-bit wide memory interface, for 1,344 GB/s of memory bandwidth. The card's TGP (total graphics power) remains unchanged from the regular RTX 5090, at 575 W. The idea behind the GeForce RTX 5090DD is to comply with U.S. export controls on GPUs with dual use as AI accelerators.

GPU IPC Showdown: NVIDIA Blackwell vs Ada Lovelace; AMD RDNA 4 vs RDNA 3

Instructions per clock (IPC) is a metric usually used to define and compare CPU architecture performance. However, our enthusiast colleagues at ComputerBase had the idea of testing IPC improvements in GPUs, comparing current and past generations. NVIDIA's Blackwell-based GeForce RTX 50 series faces off against the Ada Lovelace-based RTX 40 generation, while AMD's RDNA 4-powered Radeon RX 9000 lineup challenges the RDNA 3-based RX 7000 series. For NVIDIA, the test used the RTX 5070 Ti and RTX 4070 Ti SUPER, aligning ALU counts and clock speeds and treating memory bandwidth differences as negligible. For AMD, the test matched the RX 9060 XT to the RX 7600 XT, both featuring identical ALU counts and GDDR6 memory. By closely matching shader counts and normalizing for clock variations, ComputerBase isolates IPC improvements from other hardware enhancements. In rasterized rendering tests across 19 popular titles, NVIDIA's Blackwell architecture delivered an average IPC advantage of just 1% over the older Ada Lovelace.

This difference could easily be attributed to normal benchmark variance. Ray tracing and path tracing benchmarks showed no significant IPC uplift either, leaving the latest generation essentially on par with its predecessor when normalized for clock and unit count. AMD's RDNA 4, by contrast, exhibited a substantial IPC leap. Rasterized performance improved by around 20% compared to RDNA 3, while ray-traced workloads enjoyed a roughly 31% gain. Path tracing results were even more extreme, with RDNA 4 delivering nearly twice the FPS of its predecessor, an almost 100% increase. These findings suggest that NVIDIA's performance improvements primarily stem from higher clock speeds, increased execution unit counts, and enhanced features, while AMD's RDNA 4 represents a significant architectural advance, marking its most notable IPC gain since the original RDNA launch.
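ComputerBase's normalization can be expressed compactly: divide the raw FPS gain by the gain expected from clock and ALU scaling alone, and whatever is left over is IPC. The sketch below uses made-up numbers purely to illustrate the method, and assumes performance scales linearly with clock and ALU count, which is the premise of the comparison.

```python
def ipc_gain(fps_new: float, fps_old: float,
             clock_new_mhz: float, clock_old_mhz: float,
             alus_new: int, alus_old: int) -> float:
    """FPS gain left over once clock-speed and shader-count scaling are factored out."""
    raw_gain = fps_new / fps_old
    hw_gain = (clock_new_mhz * alus_new) / (clock_old_mhz * alus_old)
    return raw_gain / hw_gain

# With matched ALU counts and clocks (hw_gain == 1), any FPS difference
# is attributed to IPC, e.g. a 20% rasterization uplift:
print(ipc_gain(120.0, 100.0, 2500.0, 2500.0, 2048, 2048))  # 1.2
```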

AMD Research Unveils Real-Time GPU-Only Pipeline for Fully Procedural Trees

An AMD research team has introduced a game-changing approach to procedural tree creation that runs entirely on the GPU, delivering speed and flexibility unlike anything we've seen before. Showcased at High-Performance Graphics 2025 in Copenhagen, the new pipeline utilizes DirectX 12 work graphs and mesh nodes to construct detailed tree models on the fly, without any CPU muscle. Artists and developers can tweak more than 150 parameters, everything from seasonal leaf color shifts and branch pruning styles to complex animations and automatic level-of-detail adjustments, all in real time. When tested on an AMD Radeon RX 7900 XTX, the system generated and pushed unique tree geometries into the geometry buffer in just over three milliseconds. It then automatically tunes detail levels to maintain a target frame rate, demonstrating a stable 120 FPS under heavy workloads.

Wind effects and environmental interactions update seamlessly, and the CPU's only job is to fill a small set of constants (camera matrices, timestamps, and so on) before dispatching a single work graph. There's no need for continuous host-device chatter or asset streaming, which simplifies integration into existing engines. Perhaps the most eye-opening result is how little memory the transient data consumes. A traditional buffer-heavy approach might need tens of GB, but AMD's demo holds onto just 51 KB of persistent state per frame—a mind-boggling 99.9999% reduction compared to conventional methods. A scratch buffer of up to 1.5 GB is allocated for work-graph execution, though actual usage varies by GPU driver and can be released or reused afterward. Static assets, such as meshes and textures, remain unaffected, leaving future opportunities for neural compression or procedural texturing to further enhance memory savings.
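Taking "tens of GB" as roughly 50 GB for the sake of illustration (our assumption, not AMD's figure), the quoted reduction checks out:

```python
persistent = 51 * 1024     # 51 KB of persistent per-frame state
baseline = 50 * 1024**3    # assumed ~50 GB buffer-heavy baseline

print(f"{1 - persistent / baseline:.4%} reduction")  # ~99.9999%
```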

NVIDIA GeForce RTX 5090 Briefly Drops Below MSRP in Europe

The GeForce RTX 5090 flagship graphics card may be a hot commodity in Asia due to its dual use in AI acceleration farms, and in the US it may be saddled with crippling import tariffs, besides unethical retail practices, but gamers in Europe have it good. ComputerBase.de reports that its price-tracker detected RTX 5090 cards briefly available at prices lower than MSRP—a first for the RTX 5090 anywhere in the world.

The MSI RTX 5090 Ventus 3X OC was launched as an MSRP-priced card listed at USD $1,999. German retailer Mindfactory.de had the card listed at €1,999 including VAT, which would make it cheaper than the MSRP and street prices of the RTX 5090 in the US and Asia. Hours later, the retailer corrected the price to €2,600. The general trend with graphics card pricing seems to be that EEA (European Economic Area) countries currently have the best prices, thanks to lower import tariffs than the US, relatively rare scalper gray-marketing, and lower demand for cards like the RTX 5090 for use cases other than gaming or professional graphics.

Humanoid Robots to Assemble NVIDIA's GB300 NVL72 "Blackwell Ultra"

NVIDIA's upcoming GB300 NVL72 "Blackwell Ultra" rack-scale systems are reportedly going to be assembled by humanoid robots, according to sources close to Reuters. As readers are aware, most traditional processes in silicon, PCB, and server manufacturing are automated, requiring little to no human intervention. However, rack-scale systems have required humans for final assembly up until now. It appears that Foxconn and NVIDIA have made plans to open the first AI-powered humanoid robot assembly plant in Houston, Texas. The central plan is that, in the coming months as the plant is completed, humanoid robots will take over the final assembly process, entirely removing humans from the manufacturing loop.

And this is not a bad thing. Since server assembly typically requires lifting heavy server racks throughout the day, the humanoid robot system will aid humans by doing the hard work, sparing workers excessive labor. Initially, humans will oversee these robots in their operations, with fully autonomous factories expected later on; the human element will primarily involve inspecting the work. NVIDIA has been laying the groundwork for humanoid robots for some time, having developed NVIDIA Isaac, a comprehensive CUDA-accelerated platform designed for humanoid robots. As robots from Agility Robotics, Boston Dynamics, Fourier, Foxlink, Galbot, Mentee Robotics, NEURA Robotics, General Robotics, Skild AI, and XPENG require models that are aware of their surroundings, NVIDIA created Isaac GR00T N1, the world's first open humanoid robot foundation model, available for anyone to use and fine-tune.

NVIDIA GeForce NOW Gets Borderlands Series and More Games

GeForce NOW is throwing open the vault doors to welcome the legendary Borderlands series to the cloud. Whether a seasoned Vault Hunter or new to the mayhem of Pandora, prepare to experience the high-octane action and humor that define the series, which includes Borderlands Game of the Year Enhanced, Borderlands 2, Borderlands 3 and Borderlands: The Pre-Sequel. Members can explore it all before the highly anticipated Borderlands 4 arrives in the cloud at launch.

In addition, leap into the flames and save the day in the pulse-pounding FBC: Firebreak from Remedy Entertainment on GeForce NOW. It's all part of the 13 new games in the cloud this week, including the latest Genshin Impact update and advanced access for REMATCH. Plus, GeForce NOW's Summer Sale is still in full swing. For a limited time, get 40% off a six-month GeForce NOW Performance membership - perfect for diving into role-playing game favorites like the Borderlands series or any of the 2,200 titles in the platform's cloud gaming library.

NVIDIA's NVLink Fusion Stays Proprietary, Third Parties Can Only Work Around It

To expand its growing influence in the data center market, NVIDIA recently launched the NVLink Fusion program. This initiative enables select partners to integrate their custom-designed chips into the NVIDIA system framework. For example, a partner can connect its custom CPU to an NVIDIA GPU using the 900 GB/s NVLink-C2C interface. However, this collaboration comes with significant restrictions. Any custom chip must be connected to an NVIDIA product, and the company maintains firm control over the critical software that manages these connections. This means partners cannot create truly independent, mix-and-match systems. NVIDIA retains control over the essential communication controller and PHY layers in the NVLink Fusion, which initializes and manages these links. It also mandates a license for third-party hardware to use its NVLink Switch chips.

In response, a growing coalition of tech giants, including AMD, Google, and Intel, is championing an open-standard alternative. Their UALink Consortium released its 1.0 specification in April 2025, defining a public standard for linking up to 1,024 accelerators from any vendor. While NVIDIA currently offers superior raw bandwidth, UALink represents a move toward greater flexibility and cost efficiency. According to reports, eight companies have expressed interest in NVLink Fusion. However, the frustration of working with a communication link they have limited visibility into can result in designs that are not always optimal or efficient. NVIDIA sets these terms from the beginning, so any company willing to work with NVLink Fusion is aware of the limitations from the outset. Cloud hyperscalers such as AWS, GCP, and Azure are reportedly among the eight interested customers, and they might swallow this pill and continue working with NVIDIA to gain access to its IP despite the limited visibility.