News Posts matching #NVIDIA

NVIDIA DLSS Coming to Skull and Bones, Banishers: Ghosts of New Eden, and Smalland: Survive the Wilds

Last week, DLSS was available at launch in the Stormgate Steam Next Fest demo, and in The Inquisitor. DLSS was also available for players of the new Atomic Heart: Trapped In Limbo expansion, and the newly released Call of Duty: Warzone Season 2. This week, Banishers: Ghosts of New Eden launches with DLSS 3, while Smalland: Survive the Wilds and Skull and Bones launch with DLSS 2.

Banishers: Ghosts of New Eden Launches Today With DLSS 3 & DLAA
Focus Entertainment and DON'T NOD's Banishers: Ghosts of New Eden launches February 13th. In New Eden, 1695, communities of settlers are plagued by a dreadful curse. As Banishers, step into their lives, investigate the source of evil, unravel chilling mysteries, explore diverse landscapes, and interact with unforgettable characters whose fate lies in your hands. Immerse yourself in an intimate narrative Action-RPG, taking you on an exhilarating journey between life, death, love and sacrifices.

NVIDIA GeForce 551.52 WHQL Game Ready Drivers Released

NVIDIA today released the latest version of its GeForce Game Ready software. Version 551.52 WHQL comes with optimization for "Skull and Bones." Among the gaming-related bugs fixed with this release are an intermittent micro-stutter noticed with V-Sync enabled; a stuttering issue with "Red Dead Redemption 2" on some Advanced Optimus notebooks; and stability issues seen with "Immortals of Aveum" over extended gameplay sessions. Non-gaming issues fixed with this release include a stutter observed with some web browsers in certain system configurations.

DOWNLOAD: NVIDIA GeForce 551.52 WHQL

Nintendo Switch 2 Could Retain Backward Compatibility with the First-Gen Console

Reports are circulating online that Nintendo's upcoming successor to the Switch console, tentatively referred to as the "Switch 2," will offer backward compatibility for physical game cards and digital purchases from the current Switch library. While Nintendo has yet to officially announce the new console, speculation points to a potential reveal as early as next month for a 2024 launch. The backward compatibility claims first surfaced last year when Nintendo of America President Doug Bowser hinted at supporting continuity between console generations to minimize the sales decline when transitioning hardware. New momentum behind the rumors comes from gaming industry insiders Felipe Lima and PH Brazil, who, during recent podcasts, stated that backward compatibility functionality for the Switch 2 is already being shared with game developers.

Well-known gaming leakers "NateTheHate" and others have corroborated that testing is underway for playing current Switch games on new hardware. If true, this backward compatibility would be a consumer-friendly move that breaks from Nintendo's past tendencies of forcing clean breaks between console ecosystems. While details remain unconfirmed by Nintendo, multiple credible sources point to the upcoming Switch successor allowing gamers to carry forward both their physical and digital libraries to continue enjoying this generation's releases. If backward compatibility is retained, the hardware platform would likely stay with the same vendor, NVIDIA, which supplied Nintendo with the Tegra X1 SoC. The updated SoC could be a fork of NVIDIA's Orin platform, based on an Ampere GPU with DLSS support, but official details are yet to be seen.

NVIDIA Introduces NVIDIA RTX 2000 Ada Generation GPU

Generative AI is driving change across industries—and to take advantage of its benefits, businesses must select the right hardware to power their workflows. The new NVIDIA RTX 2000 Ada Generation GPU delivers the latest AI, graphics and compute technology to compact workstations, offering up to 1.5x the performance of the previous-generation RTX A2000 12 GB in professional workflows. From crafting stunning 3D environments to streamlining complex design reviews to refining industrial designs, the card's capabilities pave the way for an AI-accelerated future, empowering professionals to achieve more without compromising on performance or capabilities. Modern multi-application workflows, such as AI-powered tools, multi-display setups and high-resolution content, put significant demands on GPU memory. With 16 GB of memory in the RTX 2000 Ada, professionals can tap the latest technologies and tools to work faster and better with their data.

Powered by NVIDIA RTX technology, the new GPU delivers impressive realism in graphics with NVIDIA DLSS, delivering ultra-high-quality, photorealistic ray-traced images more than 3x faster than before. In addition, the RTX 2000 Ada enables an immersive experience for enterprise virtual-reality workflows, such as for product design and engineering design reviews. With its blend of performance, versatility and AI capabilities, the RTX 2000 Ada helps professionals across industries achieve efficiencies. Architects and urban planners can use it to accelerate visualization workflows and structural analysis, enhancing design precision. Product designers and engineers using industrial PCs can iterate rapidly on product designs with fast, photorealistic rendering and AI-powered generative design. Content creators can edit high-resolution videos and images seamlessly, and use AI for realistic visual effects and content creation assistance. And in vital embedded applications and edge computing, the RTX 2000 Ada can power real-time data processing for medical devices, optimize manufacturing processes with predictive maintenance and enable AI-driven intelligence in retail environments.

AMD Develops ROCm-based Solution to Run Unmodified NVIDIA CUDA Binaries on AMD Graphics

AMD has quietly funded an effort over the past two years to enable binary compatibility for NVIDIA CUDA applications on its ROCm stack. This allows CUDA software to run on AMD Radeon GPUs without adapting the source code. The project responsible is ZLUDA, which was initially developed to provide CUDA support on Intel graphics. The developer behind ZLUDA, Andrzej Janik, was contracted by AMD in 2022 to adapt his project for use on Radeon GPUs with HIP/ROCm. He spent two years bringing functional CUDA support to AMD's platform, allowing many real-world CUDA workloads to run without modification. AMD decided not to productize this effort for unknown reasons, but did open-source it once funding ended, per their agreement. Over at Phoronix, the ZLUDA implementation was put through a wide variety of benchmarks.

Benchmarks found that proprietary CUDA renderers and software worked on Radeon GPUs out-of-the-box with the drop-in ZLUDA library replacements. CUDA-optimized Blender 4.0 rendering now runs faster on AMD Radeon GPUs than the native ROCm/HIP port, reducing render times by around 10-20%, depending on the scene. The implementation is surprisingly robust, considering it was a single-developer project. However, there are some limitations: OptiX and PTX assembly code are not yet fully supported. Overall, though, testing showed very promising results. In Geekbench, CUDA-optimized binaries produce up to 75% better results than the generic OpenCL runtime. With the ZLUDA libraries handling API translation, unmodified CUDA binaries can now run directly on top of ROCm and Radeon GPUs. Strangely, the ZLUDA port targets AMD ROCm 5.7, not the newest 6.x versions. Only time will tell if AMD continues investing in this approach to simplify porting of CUDA software. However, the open-sourced project now enables anyone to contribute and help improve compatibility. For a complete review, check out Phoronix's tests.
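To illustrate what "unmodified" means here, the sketch below is an ordinary CUDA vector-add program of the kind ZLUDA targets: it is written and compiled against NVIDIA's CUDA toolkit as usual, with no AMD-specific changes (file and kernel names are purely illustrative). Per the Phoronix report, a binary like this can then run on a Radeon GPU once ZLUDA's drop-in replacement libraries are picked up in place of NVIDIA's; the ZLUDA setup steps are not shown here and vary by release, so treat this as an example of a plain CUDA workload rather than a guaranteed-supported case.

// vecadd.cu - a plain CUDA vector-add, built with the standard CUDA toolchain
// (e.g. "nvcc vecadd.cu -o vecadd"). Nothing here references ZLUDA or ROCm;
// any translation happens at the library level, beneath the compiled binary.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    // Standard CUDA runtime API calls: allocate, copy, launch, copy back.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb.data(), bytes, cudaMemcpyHostToDevice);

    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hc.data(), dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.000000

    cudaFree(da);
    cudaFree(db);
    cudaFree(dc);
    return 0;
}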

Widespread GeForce RTX 4080 SUPER Card Shortage Reported in North America

NVIDIA's decision to shave $200 off its GeForce RTX 4080 GPU tier has caused a run on retail since the launch of the SUPER variants late last month. VideoCardz has investigated an apparent North American supply shortage. The adjusted $999 base MSRP appears to be an irresistible prospect for discerning US buyers; today's report finds that, a week after release, GeForce RTX 4080 SUPER cards "are not available at any major US retailer for online orders." At the time of writing, no $999 models are available to purchase via e-tailers (for delivery); Best Buy and Micro Center have a smattering of baseline MSRP cards (including the Founders Edition), but for in-store pickup only. Across the pond, AD103 SUPER's supply status is a bit different: "On the other hand, in Europe, the situation appears to be more favorable, with several retailers listing the cards at or near the MSRP of €1109."

The cheapest custom GeForce RTX 4080 SUPER SKU, at $1123, seems to be listed by Amazon.com. Almost all of Newegg's product pages are displaying an "Out of Stock" notice; ZOTAC GAMING's GeForce RTX 4080 SUPER Trinity OC White Edition model is on "back order" for $1049.99, while the only "in stock" option is MSI's GeForce RTX 4080 SUPER Expert card (at $1149.99). VideoCardz notes that GeForce RTX 4070 SUPER and RTX 4070 Ti SUPER models are in plentiful supply, which highlights a big contrast in market conditions for NVIDIA's latest Ada Lovelace families. The report also mentions an ongoing shortage of GeForce RTX 4080 (non-SUPER) cards, going back weeks prior to the official January 31 rollout: "Similar to the RTX 4090, finding the RTX 4080 at its $1200 price point has proven challenging." Exact sales figures are not available to media outlets (it is unusual to see official metrics presented a week or two after a product's launch), so we will have to wait a little longer to find out whether demand has far outstripped supply in the USA.

Cisco & NVIDIA Announce Easy to Deploy & Manage Secure AI Solutions for Enterprise

This week, Cisco and NVIDIA have announced plans to deliver AI infrastructure solutions for the data center that are easy to deploy and manage, enabling the massive computing power that enterprises need to succeed in the AI era. "AI is fundamentally changing how we work and live, and history has shown that a shift of this magnitude is going to require enterprises to rethink and re-architect their infrastructures," said Chuck Robbins, Chair and CEO, Cisco. "Strengthening our great partnership with NVIDIA is going to arm enterprises with the technology and the expertise they need to build, deploy, manage, and secure AI solutions at scale." Jensen Huang, founder and CEO of NVIDIA, said: "Companies everywhere are racing to transform their businesses with generative AI. Working closely with Cisco, we're making it easier than ever for enterprises to obtain the infrastructure they need to benefit from AI, the most powerful technology force of our lifetime."

A Powerful Partnership
Cisco, with its industry-leading expertise in Ethernet networking and extensive partner ecosystem, together with NVIDIA, the inventor of the GPU that fueled the AI boom, share a vision and commitment to help customers navigate the transitions for AI with highly secure Ethernet-based infrastructure. Cisco and NVIDIA have offered a broad range of integrated product solutions over the past several years across Webex collaboration devices and data center compute environments to enable hybrid workforces with flexible workspaces, AI-powered meetings and virtual desktop infrastructure.

ASRock Dives Into Why it Lacks NVIDIA GeForce Graphics Cards; Doesn't Rule Out Making Them in the Future

ASRock, ODM giant Pegatron's retail channel brand, built its reputation on its high cost/performance motherboards, and got into graphics cards rather recently (less than five years ago), beginning with AMD Radeon graphics cards before expanding into Intel Arc GPUs. The company has shown with its high-end AMD Radeon cards that it can design complex custom-design graphics cards with heavy cooling solutions, especially given that AMD Radeon boards tend to have more elaborate power designs than NVIDIA's. So then, where are the ASRock GeForce RTX graphics cards? Korean tech publication QuasarZone set out to find out from ASRock.

Put simply, ASRock does not rule out making custom-design GeForce RTX graphics cards in the future, but says that getting into that market right now is "challenging." NVIDIA now commands the vast majority of the discrete GPU market, and as such most of the top DIY PC retail channel brands (such as ASUS, MSI, and GIGABYTE) sell both GeForce and Radeon products. They started making GeForce graphics cards decades ago, and have built market presence over the years. NVIDIA also has a set of board partners that exclusively sell GeForce (such as PNY, Palit-Gainward, Galax-KFA2, and Colorful), which makes it all the more difficult for ASRock to break in. On the specific question asked by QuasarZone, here was ASRock's answer (machine translated to English by VideoCardz).

Zephyr x VK Valkyrie GeForce RTX 4080 SUPER Revealed

Zephyr and VK Valkyrie have collaborated on a very high-end custom GeForce RTX 4080 SUPER graphics card model, as revealed in a teaser video posted to the former's Bilibili account. VK Valkyrie is a well-regarded DIY brand in the Chinese PC gaming market, while Zephyr is a relatively young manufacturer—their unusual GeForce RTX 3060 Ti Compact ITX design with a pink PCB was introduced last summer. TPU's June 2023 news report is featured prominently within their website's news section—greatly appreciated! The Zephyr x VK Valkyrie GeForce RTX 4080 SUPER will be a limited edition release—the two partners have been working together since last August, but a specific launch date and pricing were not revealed in Zephyr's teaser trailer.

Zephyr has, so far, only released air-cooled custom graphics cards—their upcoming VK Valkyrie collaborative model will mark a debut entry into liquid-cooled territory. Their chunky 3-slot design consists of a substantial heatsink covered by an RGB-adorned silver shroud and metallic backplate, with an AIO liquid cooling solution. A 280 mm radiator (with 2 x 140 mm fans) is hooked up to the card via twin white braided tubes. A rear-firing 12VHPWR connector provides an elegant means of semi-concealing your 90-degree power cable, if need be. The promotional video includes benchmark results generated by the 3DMark Speed Way, Time Spy Extreme, and Fire Strike Ultra suites (check the relevant screenshot below). Zephyr claims that their limited edition GeForce RTX 4080 SUPER model stayed cool, not exceeding 52 degrees Celsius during a heavy FurMark session. The company recommends that interested parties check its social media accounts for further announcements. The Zephyr x VK Valkyrie GeForce RTX 4080 SUPER could arrive at some point after the Chinese Spring Festival.

Update Feb 9th: Valkyrie informed us that, for the moment, this collaboration is specific to the Chinese market, but they are discussing internally whether it makes sense to bring the card to the West, too.

NVIDIA GeForce RTX 4070 Ti Drops Down to $699, Matches Radeon RX 7900 XT Price

The NVIDIA GeForce RTX 4070 Ti can now be found for as low as $699, which means it is now selling at the same price as the AMD Radeon RX 7900 XT graphics card. The GeForce RTX 4070 Ti definitely lags behind the Radeon RX 7900 XT in performance and packs less VRAM (12 GB vs. 20 GB), while the faster GeForce RTX 4070 Ti SUPER is selling for around $100 more. The Radeon RX 7900 XT is around 6 to 11 percent faster, depending on the game and the resolution.

The GeForce RTX 4070 Ti card in question comes from MSI, and it is the Ventus 2X OC model, listed over at Newegg.com for $749.99 with a $50-off promotion code. Bear in mind that this is a dual-fan version from MSI, and we are quite sure we'll see similar promotions from other NVIDIA AIC partners.

NVIDIA Releases Hotfix Driver to Fix Stuttering

NVIDIA has released a new GeForce Hotfix Driver Version 551.46 that should fix stuttering issues in some scenarios. According to the release notes, the new hotfix driver fixes micro-stuttering in some games when vertical sync is enabled, as well as stuttering when scrolling in web browsers. It also fixes stuttering issues on Advanced Optimus notebooks when running Red Dead Redemption 2 under the Vulkan API, and stability issues in Immortals of Aveum during extended gameplay.

The new GeForce Hotfix Driver Version 551.46 is based on the latest GeForce WHQL driver, version 551.23. You can download the new GeForce Hotfix Driver Version 551.46 over at NVIDIA's support page.

GeForce NOW Celebrates Four Year Anniversary

The GeForce NOW anniversary celebrations continue with more games and a member-exclusive discount on the Logitech G Cloud. Among the six new titles coming to the cloud this week is The Inquisitor from Kalypso Media, which spotlights the GeForce NOW anniversary with a special shout-out. "Congrats to four years of empowering gamers to play anywhere, anytime," said Marco Nier, head of marketing and public relations at Kalypso Media. "We're thrilled to raise a glass to GeForce NOW for their four-year anniversary and commitment to bringing AAA gaming to gamers—here's to many more chapters in this cloud-gaming adventure!" Stream the dark fantasy adventure from Kalypso Media and more newly supported titles today across a variety of GeForce NOW-capable devices, whether at home, on a gaming rig, TV or Mac, or on the go with handheld streaming.

Gadgets Galore
Gone are the days of only being able to play full PC games on a decked-out gaming rig. GeForce NOW is a cloud gaming service accessible on a range of devices, from PCs and Macs to gaming handhelds, thanks to GeForce RTX-powered servers in the cloud. Dive into the cloud streaming experience with the dedicated GeForce NOW app for Windows and macOS. Even on underpowered PCs, gamers can enjoy stunning visuals and buttery-smooth frame rates streaming at up to 240 frames per second or at ultrawide resolutions for Ultimate members, a cloud-gaming first.

NVIDIA CG100 "Grace" Server Processor Benchmarked by Academics

The Barcelona Supercomputing Center (BSC) and the State University of New York (Stony Brook and Buffalo campuses) have pitted NVIDIA's relatively new CG100 "Grace" Superchip against several rival products in a "wide variety of HPC and AI benchmarks." Team Green marketing material has focused mainly on the overall GH200 "Grace Hopper" package—so it is interesting to see technical institutes concentrate on the company's "first true" server processor (ARM-based), rather than the ever popular GPU aspect. The Next Platform's article summarized the chip's internal makeup: "(NVIDIA's) Grace CPU has a relatively high core count and a relatively low thermal footprint, and it has banks of low-power DDR5 (LPDDR5) memory—the kind used in laptops but gussied up with error correction to be server class—of sufficient capacity to be useful for HPC systems, which typically have 256 GB or 512 GB per node these days and sometimes less."

Benchmark results were revealed at last week's HPC Asia 2024 conference (in Nagoya, Japan)—Barcelona Supercomputing Center (BSC) and the State University of New York also uploaded their findings to the ACM Digital Library (link #1 & #2). BSC's MareNostrum 5 system contains an experimental cluster portion—consisting of NVIDIA Grace-Grace and Grace-Hopper superchips. We have heard plenty about the latter (in press releases), but the former is a novel concept—as outlined by The Next Platform: "Put two Grace CPUs together into a Grace-Grace superchip, a tightly coupled package using NVLink chip-to-chip interconnects that provide memory coherence across the LPDDR5 memory banks and that consumes only around 500 watts, and it gets plenty interesting for the HPC crowd. That yields a total of 144 Arm Neoverse "Demeter" V2 cores with the Armv9 architecture, and 1 TB of physical memory with 1.1 TB/sec of peak theoretical bandwidth. For some reason, probably relating to yield on the LPDDR5 memory, only 960 GB of that memory capacity and only 1 TB/sec of that memory bandwidth is actually available."

GIGABYTE Highlights its GPU Server Portfolio Ahead of World AI Festival

The World AI Cannes Festival (WAICF) is set to be the epicenter of artificial intelligence innovation, where the globe's top 200 decision-makers and AI innovators will converge for three days of intense discussions on groundbreaking AI strategies and use-cases. Against the backdrop of this premier event, GIGABYTE has strategically chosen to participate, unveiling its exponential growth in the AI and High-Performance Computing (HPC) market segments.

The AI industry has witnessed unprecedented growth, with Cloud Service Providers (CSPs) and data center operators spearheading supercomputing projects. GIGABYTE's decision to promote its GPU server portfolio of over 70 models at WAICF is a testament to the increasing demands from the French market for sovereign AI Cloud solutions. The spotlight will be on GIGABYTE's success stories in enabling GPU Cloud infrastructure, seamlessly powered by NVIDIA GPU technologies, as GIGABYTE aims to engage in meaningful conversations with end-users and firms dependent on GPU computing.

Huawei Reportedly Prioritizing Ascend AI GPU Production

Huawei's Ascend 910B AI GPU is reportedly in high demand in China—we last learned that NVIDIA's latest US sanction-busting H20 "Hopper" model is lined up as a main competitor, allegedly in terms of both pricing and performance. A recent Reuters report proposes that Huawei is reacting to native enterprise market trends by shifting its production priorities—in favor of Ascend product ranges, while demoting their Kirin smartphone chipset family. Generative AI industry experts believe that the likes of Alibaba and Tencent have rejected Team Green's latest batch of re-jigged AI chips (H20, L20 and L2)—tastes have gradually shifted to locally developed alternatives.

Huawei leadership is seemingly keen to seize these growth opportunities—their Ascend 910B is supposedly ideal for workloads "that require low-to-mid inferencing power." Reuters has spoken to three anonymous sources—all with insider knowledge of goings-on at a single facility that manufactures Ascend AI chips and Kirin smartphone SoCs. Two of the leakers claim that this unnamed fabrication location is facing many "production quality" challenges, namely output being "hamstrung by a low yield rate." The report claims that Huawei has pivoted by deprioritizing Kirin 9000S (7 nm) production, thus creating a knock-on effect for its premium Mate 60 smartphone range.

AMD Radeon RX 7900 XT Now $100 Cheaper Than GeForce RTX 4070 Ti SUPER

Prices of the AMD Radeon RX 7900 XT graphics card hit new lows, with a Sapphire custom-design card selling for $699 with a coupon discount on Newegg. This makes it a whole $100 (12.5%) cheaper than the recently announced NVIDIA GeForce RTX 4070 Ti SUPER. The most interesting part of the story is that the RX 7900 XT is technically from a segment above. Originally launched at $900, the RX 7900 XT is recommended by AMD for 4K Ultra HD gaming with ray tracing, while the RTX 4070 Ti SUPER is officially recommended by NVIDIA for maxed out gaming with ray tracing at 1440p, although throughout our testing, we found the card to be capable of 4K Ultra HD gaming.

The Radeon RX 7900 XT offers about the same performance as the RTX 4070 Ti SUPER, averaging 1% higher than it in our testing at the 4K Ultra HD resolution. At 1440p, the official stomping ground of the RTX 4070 Ti SUPER, the RX 7900 XT comes out 2% faster. These are, of course, pure raster 3D workloads. In our testing with ray tracing enabled, the RTX 4070 Ti SUPER storms past the RX 7900 XT, posting 23% higher performance at 4K Ultra HD, and 21% higher performance at 1440p.

NVIDIA DLSS 3 Comes to Stormgate Demo, Atomic Heart: Trapped In Limbo and More Games

DLSS is available in both Enshrouded and Palworld, 2024's two breakout hits that already boast over 20 million combined players. Now, you can try the demo of Stormgate, one of the next Steam hits. Developed by RTS veterans, Stormgate concluded a Kickstarter campaign last week with over $2.2 million pledged, making it the top gaming Kickstarter of 2023 and of 2024 to date. With such interest, Stormgate is primed to be the year's next hit game, and you can already get the definitive experience in the demo thanks to the inclusion of DLSS 3. Additionally, DLSS 3 is available today in the new Atomic Heart: Trapped In Limbo expansion, and in Call of Duty: Warzone Season 2, which begins tomorrow. And rounding out this bumper crop of releases is The Inquisitor, launching later this week with DLSS 2.

Stormgate Demo Available Now With DLSS 3
Stormgate is the upcoming free-to-play real-time strategy game from the team at Frost Giant Studios, featuring developers who've worked on some of gaming's most popular RTS games, including Warcraft III and StarCraft II. Stormgate's recent crowdfunding campaign was the most successful video game Kickstarter of 2023 and the highest-earning real-time strategy game Kickstarter of all time.

Galax Intros GeForce RTX 3050 6GB Low Profile White Graphics Card

Galax did a rather unique custom-design take on the GeForce RTX 3050 6 GB, NVIDIA's new entry-level GPU. Called simply the RTX 3050 6 GB LP White, the card is an ode to the white color scheme not just in its cooler shroud and fan impellers, but even the PCB itself. That's right, the card features an all-white PCB to go with its extruded-aluminium monoblock heatsink, which is ventilated by a pair of 40 mm fans.

Another surprising design choice with this card is that it provides four display connectors within its half-height PCB, without a breakout connector. Display connectors include two each of DisplayPort 1.4a and HDMI 2.1 ports. The card takes advantage of the RTX 3050 6 GB's low 70 W TGP, which lets it do away with power connectors entirely (the PCIe slot can supply up to 75 W) and extend its heatsink across much of the PCB's topside. It runs close to reference speeds, with a tiny factory overclock of 1485 MHz boost compared to the 1470 MHz reference clock. The four-display-connector setup in particular should appeal to the non-gaming crowd that needs a graphics card for multiple displays. The company didn't reveal pricing.

ASUS Launches GeForce RTX 4080 SUPER Noctua OC Edition Graphics Card

ASUS today announced the GeForce RTX 4080 SUPER Noctua OC Edition graphics card, designed for gamers and creators who seek cutting-edge performance that operates at whisper-quiet noise levels. Building on the class-leading design of the ASUS GeForce RTX 3070, RTX 3080, and RTX 4080 Noctua Edition variants, this premium graphics card achieves near-silent operation thanks to the Noctua fans, an ASUS custom-built vapor chamber and a massive heatsink.

The ASUS GeForce RTX 4080 SUPER Noctua Edition incorporates a pair of Noctua's state-of-the-art NF‑A12x25 fans to move air over an extra-large heatsink. These fans were developed in a partnership between the ASUS thermal R&D team and engineers at Noctua. The design involved tuning the fin density and heat pipe arrangement of the heatsink, optimizing each for the airflow characteristics of the NF-A12x25 fans. To prioritize low noise levels, a smooth acoustic signature and high cooling performance while making room for the pair of 120 mm fans, ASUS expanded the card's total volume to occupy 4.3 slots of space. The card also comes with a Dual BIOS switch, with Quiet mode enabled by default to minimize acoustics right out of the box.

NVIDIA Readies RTX TrueHDR, Converts SDR Games to HDR in Real-time using AI

NVIDIA is putting the finishing touches on a new feature called RTX TrueHDR. This was discovered by a user on NexusMods, who went ahead and published a mod based on it. This is essentially a driver-level utility that converts SDR games to HDR in real-time by leveraging the generative AI capabilities of GeForce RTX GPUs and their Tensor cores. From the looks of it, it appears to be a derivative of the RTX Video HDR enhancement, except that it works with the lossless output of a game. A vast selection of gaming monitors these days come with at least DisplayHDR 400 capability and the ability to display HDR content, which gives NVIDIA a sizable market to address with RTX TrueHDR. There's no word on when NVIDIA plans to release this feature, but it could only be a matter of time (weeks, if not months), given that NVIDIA drivers are already capable of SDR-to-HDR conversion.

Mod Unlocks FSR 3 Fluid Motion Frames on Older NVIDIA GeForce RTX 20/30 Series Cards

NVIDIA's latest RTX 40 series graphics cards feature impressive new technologies like DLSS 3 that can significantly enhance performance and image quality in games. However, owners of older 20 and 30 series NVIDIA GeForce RTX cards cannot officially benefit from these cutting-edge advances. DLSS 3's Frame Generation feature, in particular, requires dedicated hardware only found in NVIDIA's brand-new Ada Lovelace architecture. But where NVIDIA has declined to enable frame generation on older-generation hardware, the ingenious modding community has stepped in with a creative workaround. A new third-party modification can unofficially activate both upscaling (FSR, DLAA, DLSS or XeSS) and AMD Fluid Motion Frames on older NVIDIA cards equipped with Tensor Cores. Replacing two key DLL files and making a small edit to the Windows registry enables the "DLSS 3" option to be activated in games running on older hardware.

In testing conducted by Digital Foundry, this modification delivered up to a 75% FPS boost - on par with the performance uplift official DLSS 3 provides on RTX 40 series cards. Games like Cyberpunk 2077, Spider-Man: Miles Morales, and A Plague Tale: Requiem were used to benchmark performance. However, there can be minor visual flaws, including incorrect UI interpolation or random frame time fluctuations. Ironically, while the FSR 3 tech itself originates from AMD, the mod currently only works on NVIDIA cards. So, while not officially supported, the resourcefulness of the modding community has remarkably managed to bring cutting-edge frame generation to more NVIDIA owners - until AMD RDNA 3 cards can utilize it as well. This shows the incredible potential of community-driven software modification and innovation.

Price Cuts Bring the GeForce RTX 4060 Ti to Within $15 of Radeon RX 7600 XT

A series of price cuts on Best Buy for the GeForce RTX 4060 Ti (8 GB) sees the card now drop to $344, down from its $399 MSRP, reports VideoCardz. This new low price puts it within just $15 of the recently launched AMD Radeon RX 7600 XT. For the vast majority of gamers playing at 1080p, this is great news. In our testing, the RTX 4060 Ti 8 GB is on average 18% faster than the RX 7600 XT in gaming without ray tracing; and a staggering 45% faster with ray tracing enabled. Both the RTX 4060 Ti and the RX 7600 XT are recommended by their makers for maxed out gaming at 1080p, including with ray tracing. Best Buy has the cheapest RTX 4060 Ti in the market right now, with the Gigabyte RTX 4060 Ti Gaming OC listed at $344.

Both the RTX 4060 Ti and the RTX 4060 appear to be designed to withstand a great degree of price cuts, to compete against the RX 7600 XT and RX 7600. The RTX 4060 Ti, much like the RX 7600 XT, features a small ASIC, just four GDDR6 memory chips on its 128-bit memory bus, a simpler 8-lane PCIe interface, and, in our opinion, a simpler VRM design than the RX 7600 XT. The difference in bill of materials would boil down to ASIC costs: the RTX 4060 Ti uses a 188 mm² die built on the newer 5 nm node, while the RX 7600 XT uses a larger 204 mm² die, albeit one based on the older 6 nm node.

Palit Introduces GeForce RTX 3050 6 GB KalmX and StormX Models

Palit Microsystems Ltd., a leading graphics card manufacturer, proudly announces the NVIDIA GeForce RTX 3050 6 GB KalmX and StormX Series graphics cards. The GeForce RTX 3050 6 GB GPU is built with the powerful graphics performance of the NVIDIA Ampere architecture. It offers dedicated 2nd gen RT Cores and 3rd gen Tensor Cores, new streaming multiprocessors, and high-speed G6 memory to tackle the latest games.

GeForce RTX 3050 6 GB KalmX: Passive Cooling. Silent Gaming
Introducing the Palit GeForce RTX 3050 KalmX, where silence meets performance in perfect harmony. The KalmX series, renowned for its ingenious fan-less design, redefines your gaming experience. With its passive cooling system, this graphics card operates silently, making it ideal for both gaming and multimedia applications. Available on shelves today—2nd February 2024.

Aetina Introduces New MXM GPUs Powered by NVIDIA Ada Lovelace for Enhanced AI Capabilities at the Edge

Aetina, a leading global Edge AI solution provider, announces the release of its new embedded MXM GPU series utilizing the NVIDIA Ada Lovelace architecture - MX2000A-VP, MX3500A-SP, and MX5000A-WP. Designed for real-time ray tracing and AI-based neural graphics, this series significantly enhances GPU performance, delivering outstanding gaming and creative, professional graphics, AI, and compute performance. It provides the ultimate AI processing and computing capabilities for applications in smart healthcare, autonomous machines, smart manufacturing, and commercial gaming.

The global GPU (graphics processing unit) market is expected to achieve a 34.4% compound annual growth rate from 2023 to 2028, with advancements in the artificial intelligence (AI) industry being a key driver of this growth. As the trend of AI applications expands from the cloud to edge devices, many businesses are seeking to maximize AI computing performance within minimal devices due to space constraints in deployment environments. Aetina's latest embedded MXM modules - MX2000A-VP, MX3500A-SP, and MX5000A-WP, adopting the NVIDIA Ada Lovelace architecture, not only make significant breakthroughs in performance and energy efficiency but also enhance the performance of ray tracing and AI-based neural graphics. The modules, with their compact design, efficiently save space, thereby opening up more possibilities for edge AI devices.

NVIDIA GeForce RTX 3050 6GB Formally Launched

NVIDIA today formally launched the GeForce RTX 3050 6 GB as its new entry-level discrete GPU. The RTX 3050 6 GB is a significantly different product from the original RTX 3050 that the company launched as a mid-range product way back in January 2022. The RTX 3050 had originally launched on the 8 nm GA106 silicon, with 2,560 CUDA cores, 80 Tensor cores, 20 RT cores, 80 TMUs, and 32 ROPs; with 8 GB of 14 Gbps GDDR6 memory across a 128-bit memory bus. These specs also matched the maximum core configuration of the smaller GA107 silicon, and so the company launched the RTX 3050 based on GA107 toward the end of 2022, with no change in specs, but a slight improvement in energy efficiency from the switch to the smaller silicon. The new RTX 3050 6 GB is based on the same GA107 silicon, but with significant changes.

To begin with, the most obvious change is memory. The new SKU features 6 GB of 14 Gbps GDDR6 across a narrower 96-bit memory bus, for 168 GB/s of memory bandwidth. That's not all: the GPU is significantly cut down, with just 16 SM instead of the 20 found on the original RTX 3050. This works out to 2,048 CUDA cores, 64 Tensor cores, 16 RT cores, 64 TMUs, and an unchanged 32 ROPs. The GPU comes with a lower boost clock of 1470 MHz, compared to 1777 MHz on the original RTX 3050. The silver lining with this SKU is its total graphics power (TGP) of just 70 W, which means that cards can completely do away with power connectors, and rely entirely on PCIe slot power. NVIDIA hasn't listed its own MSRP for this SKU, but last we heard, it was supposed to go for $179, and square off against the likes of the Intel Arc A580.
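For reference, the 168 GB/s figure follows directly from the bus width and per-pin data rate, a back-of-the-envelope check rather than an official NVIDIA breakdown:

\[ \frac{96\ \text{bits} \times 14\ \text{Gbps per pin}}{8\ \text{bits/byte}} = 168\ \text{GB/s} \]

By the same arithmetic, the original RTX 3050's 128-bit bus at 14 Gbps works out to 224 GB/s, so the new SKU gives up a quarter of the memory bandwidth alongside the cut-down GPU.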