News Posts matching #Deep Learning

NVIDIA Releases Game Ready 496.13 WHQL GeForce Graphics Driver, Support Removed for Windows 8.1/8/7 & Kepler

NVIDIA has today launched its 496.13 Game Ready WHQL GeForce graphics driver with many improvements and changes. Starting with the naming, the company has jumped from the 472.12 WHQL version released on September 20th to the 496.xx branch released today. Such a significant jump in version numbering is uncommon and makes us wonder why the company decided to do it; it was probably done in preparation for the Windows 11 branch of its drivers, which uses the 500-series versioning.

Starting with release 496.13, NVIDIA has also removed support for Windows 8.1, Windows 8, and Windows 7. The last driver to support these operating systems is 472.12. This makes some sense, since Microsoft launched its Windows 11 operating system between that release and today. NVIDIA also trimmed more fat by removing support for the Kepler architecture, which launched in 2012 and included models like the GeForce GTX 780 Ti, GTX 780, GTX 770, GTX 760, GT 740, GT 730, GTX 690, GTX 680, GTX 670, GTX 660 Ti, GTX 660, GTX 650 Ti, and GT 630.

Update 15:57 UTC: Added confirmation from NVIDIA
Download NVIDIA GeForce Graphics Drivers 496.13 WHQL.

NVIDIA DLSS Gets Ported to 10 Additional Titles, Including the New Back 4 Blood Game

NVIDIA's Deep Learning Super Sampling (DLSS) technology has been one of the main selling points of GeForce RTX graphics cards. With broad adoption of the technology among many popular game titles, the gaming community has enjoyed the AI-powered upscaling technology that boosts frame-rate output and delivers better overall performance. Today, the company announced that DLSS has arrived in 10 additional game titles: today's release of Back 4 Blood, as well as Alan Wake Remastered, Baldur's Gate 3, Chivalry 2, the Crysis Remastered Trilogy, FIST, Rise of the Tomb Raider, Shadow of the Tomb Raider, Sword and Fairy 7, and Swords of Legends Online.

With so many titles receiving the DLSS update, NVIDIA recommends using the latest GeForce driver to achieve the best possible performance in the listed games. If you are wondering just how much DLSS adds to performance, in the newest Back 4 Blood title, RTX GPUs see a 46% boost in FPS. Similar performance gains carry over to the other titles that received the DLSS patch. You can expect to achieve more than double the number of frames in older titles like Alan Wake Remastered, the Tomb Raider saga, and FIST.
For more information about performance at 4K resolution, please see the slides supplied by NVIDIA below.

NVIDIA CEO Jensen Huang to Unveil New AI Technologies, Products in GTC Keynote

NVIDIA today announced that it will host a global, virtual GTC from Nov. 8-11, featuring a news-filled keynote by NVIDIA founder and CEO Jensen Huang and talks from some of the world's preeminent AI research and industry leaders. Huang's keynote will be livestreamed on Nov. 9 at 9 a.m. Central European Time/4 p.m. China Standard Time/12 a.m. Pacific Standard Time, with a rebroadcast at 8 a.m. PST for viewers in the Americas. Registration is free and is not required to view the keynote.

More than 200,000 developers, innovators, researchers and creators are expected to register for the event, which will focus on deep learning, data science, high performance computing, robotics, data center/networking and graphics. Speakers will share the latest breakthroughs that are transforming some of the world's largest industries, such as healthcare, transportation, manufacturing, retail and finance.

NVIDIA Prepares to Deliver Deep Learning Anti-Aliasing Technology for Improved Visuals, Coming first to The Elder Scrolls Online

Some time ago, NVIDIA launched its Deep Learning Super Sampling (DLSS) technology to deliver AI-enhanced upscaled images in your favorite AAA titles. It uses proprietary algorithms developed by NVIDIA and relies on the computational power of the Tensor cores found in GeForce RTX graphics cards. In the early days of DLSS, NVIDIA talked about an additional technology called DLSS2X, which was supposed to be based on the same underlying techniques as DLSS; however, it was intended purely for image sharpening rather than any upscaling. That technology got its official name today: Deep Learning Anti-Aliasing, or DLAA for short.

DLAA uses technology similar to DLSS and aims to bring NVIDIA's image-sharpening tech to video games. It uses the Tensor cores found in GeForce RTX graphics cards to provide much better visual quality without sacrificing performance, as the work runs on dedicated cores. The technology will reportedly be offered alongside DLSS and other anti-aliasing options in in-game settings. The first game to support it is The Elder Scrolls Online, which now has it on the public test server, with availability to the general public coming later.

Red Dead Redemption 2 Update Arrives July 13, Brings NVIDIA DLSS Technology

Rockstar Games, the developer behind plenty of AAA titles, including Red Dead Redemption 2 (RDR2), is preparing an interesting update for all the RDR2 players. Called the "Blood Money" update, it brings new player activities to the Red Dead Online community. As Rockstar describes it, you will "Step into the criminal underworld of Red Dead Online: Blood Money on July 13, and prove you are willing to get your hands dirty in service to notorious and well-connected members of Saint-Denis society." While the update brings interesting content, it also integrates new technologies for visual/graphics improvements. NVIDIA's Deep Learning Super Sampling (DLSS) is also landing on July 13, allowing you to play the game with better performance and image quality if you have a GeForce RTX graphics card.
Check out the Red Dead Online: Blood Money trailer below.

NVIDIA Working on Ultra Quality Mode for DLSS Upscaling

NVIDIA's Deep Learning Super Sampling (DLSS) technology was developed to upscale lower resolutions using artificial intelligence and deep learning algorithms. By using this technique, users with RTX cards can increase their framerates in supported games with minimal loss in image quality. Recently, AMD introduced FidelityFX Super Resolution (FSR), a competing technology which, in one aspect, might be technologically better than DLSS. How, you might wonder? Well, at the "quality" setting, NVIDIA's DLSS renders the game at 66.6% of the output resolution, upscaling it 1.5 times. At its "Ultra Quality" preset, AMD FSR renders the game at 77% of the resolution and upscales the image by 1.3 times. This technically gives AMD's FSR an advantage, as the image is poised to look better with less upscaling. DLSS, on the other hand, uses much more information, because it considers multiple frames in its temporal algorithm.
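To put those percentages in perspective, below is a minimal sketch of the render-resolution math, assuming the per-axis scale factors quoted above (1.5x for DLSS "Quality", 1.3x for FSR "Ultra Quality") and a hypothetical 4K output; the helper function and preset table are illustrative, not any vendor API.

```python
# Back-of-envelope math for upscaler render resolutions (illustrative only).
# Scale factors are per axis: output dimension divided by internal dimension.

def internal_resolution(output_w: int, output_h: int, scale: float) -> tuple[int, int]:
    """Return the internal render resolution for a given per-axis upscale factor."""
    return round(output_w / scale), round(output_h / scale)

presets = {
    "DLSS Quality (1.5x)": 1.5,       # ~66.6% of the output resolution per axis
    "FSR Ultra Quality (1.3x)": 1.3,  # ~77% of the output resolution per axis
}

output_w, output_h = 3840, 2160  # hypothetical 4K target

for name, scale in presets.items():
    w, h = internal_resolution(output_w, output_h, scale)
    pixel_share = (w * h) / (output_w * output_h)
    print(f"{name}: renders {w}x{h} (~{pixel_share:.0%} of the output pixels)")
```

Per axis the gap looks small, but in pixel count the 1.3x preset renders roughly a third more pixels than the 1.5x preset, which is exactly the gap an NVIDIA "Ultra Quality" mode would close.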

That newfound competition could be what made NVIDIA rethink its options, and today we are getting some exciting news regarding DLSS. In the Unreal Engine 5 (UE5) documentation, there is a placeholder for an "Ultra Quality" DLSS mode, which is supposed to rival AMD's "Ultra Quality" mode and offer the best possible image quality. Currently, the latest DLSS version is 2.2.6.0, which is present in some DLSS-supported games and can be added to others using a DLL swap. The updated version containing the Ultra Quality preset, DLSS 2.2.9.0, is already present in UE5. Alexander Battaglia from Digital Foundry has made a quick comparison of the two versions; however, we are waiting for more in-depth testing to see the final results.

AAEON Announces ARES-WHI0 Server Board

AAEON, an industry leader in AI and IoT solutions, is excited to celebrate the launch of Intel's latest scalable platform, the 3rd Generation Intel Xeon Scalable Processors (formerly Ice Lake-SP). As an associate member of the Intel IoT Solutions Alliance, AAEON is pleased to bring this new technology to market with the ARES-WHI0 industrial ATX server board and other future products.

The 3rd Generation Intel Xeon Scalable Processors deliver the next generation of high-end computing performance and support for vital data integrity and security technologies. This new generation of Xeon SP brings higher processing speeds and Intel's Deep Learning Boost technology, allowing for greater acceleration and more efficient processing for AI server applications.

11th Gen Intel Core Unleashes Unmatched Overclocking, Game Performance

The 11th Gen Intel Core S-series desktop processors (code-named "Rocket Lake-S") launched worldwide today, led by the flagship Intel Core i9-11900K. Reaching speeds of up to 5.3 GHz with Intel Thermal Velocity Boost, the Intel Core i9-11900K delivers even more performance to gamers and PC enthusiasts.

Engineered on the new Cypress Cove architecture, 11th Gen Intel Core S-series desktop processors are designed to transform hardware and software efficiency and increase raw gaming performance. The new architecture brings up to 19% gen-over-gen instructions per cycle (IPC) improvement for the highest frequency cores and adds Intel UHD graphics featuring the Intel Xe graphics architecture for rich media and intelligent graphics capabilities. That matters because games and most applications continue to depend on high-frequency cores to drive high frame rates and low latency.

Designed to Game: With its new 11th Gen desktop processors, Intel continues to push desktop gaming performance to the limits and deliver the most amazing immersive experiences for players everywhere.
Read the TechPowerUp Reviews of the Core i9-11900K and Core i5-11600K.

Next-Generation Nintendo Switch SoC to be Powered by NVIDIA's Ada Lovelace GPU Architecture

Nintendo's Switch console is one of the most successful consoles the Japanese company has ever made. It has sold millions of units and has received great feedback from the gaming community. However, as the hardware inside the console becomes outdated, the company is thinking about launching a new revision with the latest hardware and technologies. Today, we got ahold of information about the graphics side of Nintendo's upcoming console. Powered by an NVIDIA Tegra SoC, it will incorporate as-yet-unknown Arm-based CPU cores. The latest rumors suggest that the CPU will be paired with NVIDIA's Ada Lovelace GPU architecture. According to @kopite7kimi, a known hardware leaker who simply replied "Ada" to VideoCardz's tweet, we are going to see the Ada Lovelace GPU architecture in the new SoC. Additionally, the new Switch SoC will have hardware-accelerated NVIDIA Deep Learning Super Sampling (DLSS) and 4K output.

NVIDIA's DLSS 2.0 is Easily Integrated into Unreal Engine 4.26 and Gives Big Performance Gains

When NVIDIA launched the second iteration of its Deep Learning Super Sampling (DLSS) technique, used to upscale lower resolutions using deep learning, everyone was impressed by the quality of the rendering it puts out. However, have you ever wondered how it all looks from the developer's side of things? Games usually need millions of lines of code, and even some small features are not easy to implement. Today, thanks to Tom Looman, a game developer working with Unreal Engine, we have found out what the DLSS 2.0 integration process looks like and how big the performance benefits coming from it are.

In the blog post, you can take a look at the example game shown by the developer. The integration with Unreal Engine 4.26 is easy, as it just requires that you compile your project against a special UE4 RTX branch and apply your AppID, which you can request on NVIDIA's website. Right now you are probably wondering what performance looks like. Well, the baseline for the results was the TXAA sampling technique used in the game demo. DLSS 2.0 managed to bring anywhere from a 60% to 180% increase in frame rate, depending on the scene. These are rather impressive numbers, and they go to show just how well NVIDIA has built its DLSS 2.0 technology. For a full overview, please refer to the blog post.
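To make those percentages more tangible, here is a small sketch that converts a baseline frame rate and a quoted uplift into the resulting frame rate and per-frame time saving; the baseline figures are hypothetical examples, not numbers from the blog post.

```python
# Convert a quoted frame-rate uplift into the resulting FPS and frame-time saving.
# Baseline FPS values are hypothetical examples, not measured data.

def apply_uplift(baseline_fps: float, uplift_pct: float) -> tuple[float, float]:
    """Return (new_fps, frame_time_saved_ms) for a percentage frame-rate uplift."""
    new_fps = baseline_fps * (1 + uplift_pct / 100)
    saved_ms = 1000 / baseline_fps - 1000 / new_fps
    return new_fps, saved_ms

for baseline, uplift in [(40, 60), (40, 180)]:  # the 60-180% range quoted above
    new_fps, saved = apply_uplift(baseline, uplift)
    print(f"{baseline} FPS +{uplift}% -> {new_fps:.0f} FPS ({saved:.1f} ms saved per frame)")
```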

NVIDIA Announces the GeForce RTX 3060, $329, 12 GB of GDDR6

NVIDIA today announced that it is bringing the NVIDIA Ampere architecture to millions more PC gamers with the new GeForce RTX 3060 GPU. With its efficient, high-performance architecture and the second generation of NVIDIA RTX, the RTX 3060 brings amazing hardware raytracing capabilities and support for NVIDIA DLSS and other technologies, and is priced at $329.

NVIDIA's 60-class GPUs have traditionally been the single most popular cards for gamers on Steam, with the GTX 1060 long at the top of the GPU gaming charts since its introduction in 2016. An estimated 90 percent of GeForce gamers currently play with a GTX-class GPU. "There's unstoppable momentum behind raytracing, which has quickly redefined the new standard of gaming," said Matt Wuebbling, vice president of global GeForce marketing at NVIDIA. "The NVIDIA Ampere architecture has been our fastest-selling ever, and the RTX 3060 brings the strengths of the RTX 30 Series to millions more gamers everywhere."

NVIDIA Updates Cyberpunk 2077, Minecraft RTX, and 4 More Games with DLSS

NVIDIA's Deep Learning Super Sampling (DLSS) technology uses advanced methods to offload sampling in games to the Tensor Cores, dedicated AI processors present on all GeForce RTX cards, including the prior Turing generation and now Ampere. NVIDIA promises that the inclusion of DLSS delivers up to a 40% performance boost, or even more. Today, the company announced that DLSS is getting support in Cyberpunk 2077, Minecraft RTX, Mount & Blade II: Bannerlord, CRSED: F.O.A.D., Scavengers, and Moonlight Blade. The inclusion of these titles brings NVIDIA's DLSS technology to a total of 32 titles, which is no small feat for a new technology.
Below, you can see the company-provided charts showing the performance impact of DLSS in the new titles, except for Cyberpunk 2077.
Update: The Cyberpunk 2077 performance numbers were leaked (thanks to kayjay010101 on TechPowerUp Forums), and you can check them out as well.

NVIDIA Brings DLSS Support To Four New Games

Artificial intelligence is revolutionizing gaming - from in-game physics and animation simulation, to real-time rendering and AI-assisted broadcasting features. And NVIDIA is at the forefront of this field, bringing gamers, scientists and creators incredible advancements. With Deep Learning Super Sampling (DLSS), NVIDIA set out to redefine real-time rendering through AI-based super resolution - rendering fewer pixels, then using AI to construct sharp, higher resolution images, giving gamers previously unheard-of performance gains.

Powered by dedicated AI processors on GeForce RTX GPUs called Tensor Cores, DLSS has accelerated performance in more than 25 games to date, boosting frame rates significantly, ensuring GeForce RTX gamers receive high-performance gameplay at the highest resolutions and detail settings, and when using immersive ray-traced effects. And now, NVIDIA has delivered four new DLSS titles for gamers to enjoy.

Intel Confirms Rocket Lake-S Features Cypress Cove with Double-Digit IPC Increase

Today, Intel has decided to surprise us and give an update on its upcoming desktop CPU lineup. With the 11th generation Core CPUs, codenamed Rocket Lake-S, Intel is preparing to launch the new lineup in the first quarter of 2021. This means that we are just a few months away from the launch. When it comes to the architecture of these new processors, they are going to be based on a special Cypress Cove design. Gone are the days of Skylake-based designs that were present from the 6th to the 10th generation of processors. Cypress Cove, as Intel calls it, is an adaptation of Ice Lake. Contrary to previous rumors, it is not an adaptation of Tiger Lake's Willow Cove, but rather of Ice Lake's Sunny Cove.

CPU instructions per cycle (IPC) are said to grow by double digits, meaning that desktop users are finally going to see an improvement that is not only frequency-based. While we do not know the numbers yet, we can expect them to be better than the current 10th gen parts. For the first time on an Intel desktop platform, we will see the adoption of PCIe 4.0, which will allow for much faster SSD speeds and support the latest GPUs; specifically, there will be 20 PCIe 4.0 lanes coming from the CPU alone. The CPU will be paired with 12th generation Xe graphics, like those found in Tiger Lake CPUs. Other technologies such as Deep Learning Boost and VNNI, Quick Sync Video, and better overclocking tuning will be present as well. An interesting thing to note here is that the 10C/20T Core i9-10900K has a PL1 limit of 125 W and a PL2 limit of 250 W. However, the 8C/16T Rocket Lake-S CPU also features a 125 W PL1 and a 250 W PL2. This indicates that the new Cypress Cove design runs hotter than the previous generation.
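Because "double-digit IPC growth" is only half of the single-thread story, here is a minimal sketch of the usual back-of-envelope estimate, where performance scales with IPC multiplied by frequency; it assumes an illustrative 19% IPC gain (the figure Intel quotes for Rocket Lake elsewhere in this listing) and hypothetical boost clocks, not confirmed Rocket Lake-S specifications.

```python
# Back-of-envelope single-thread estimate: performance ~ IPC x frequency.
# The boost clocks below are hypothetical placeholders, not confirmed specs.

def relative_single_thread(ipc_gain_pct: float, old_ghz: float, new_ghz: float) -> float:
    """Return the new part's estimated single-thread performance relative to the old one."""
    return (1 + ipc_gain_pct / 100) * (new_ghz / old_ghz)

# Example: a 19% IPC gain at the same 5.3 GHz boost clock as the Core i9-10900K
print(f"Same clock:  {relative_single_thread(19, 5.3, 5.3):.2f}x")
# Example: the same IPC gain with a 200 MHz lower boost clock
print(f"Lower clock: {relative_single_thread(19, 5.3, 5.1):.2f}x")
```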

NVIDIA to Build Fastest AI Supercomputer in Academia

The University of Florida and NVIDIA Tuesday unveiled a plan to build the world's fastest AI supercomputer in academia, delivering 700 petaflops of AI performance. The effort is anchored by a $50 million gift: $25 million from alumnus and NVIDIA co-founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

"We've created a replicable, powerful model of public-private cooperation for everyone's benefit," said Malachowsky, who serves as an NVIDIA Fellow, in an online event featuring leaders from both the UF and NVIDIA. UF will invest an additional $20 million to create an AI-centric supercomputing and data center.

GeForce NOW Gains NVIDIA DLSS 2.0 Support In Latest Update

NVIDIA's game streaming service GeForce NOW has gained support for NVIDIA Deep Learning Super Sampling (DLSS) 2.0 in the latest update. DLSS 2.0 uses the Tensor cores found in RTX-series graphics cards to render games at a lower resolution and then uses AI to construct sharp, higher-resolution images. The introduction of DLSS 2.0 to GeForce NOW should allow graphics quality to be improved on existing server hardware and deliver a smoother, stutter-free gaming experience. NVIDIA announced that Control would be the first game on the platform to support DLSS 2.0, with additional games such as MechWarrior 5: Mercenaries and Deliver Us The Moon to support the feature in the future.

Intel Announces 10th Gen Core X Series and Revised Pricing on Xeon-W Processors

Intel today unveiled its latest lineup of Intel Xeon W and X-series processors, which puts new classes of computing performance and AI acceleration into the hands of professional creators and PC enthusiasts. Custom-designed to address the diverse needs of these growing audiences, the new Xeon W-2200 and X-series processors are targeted to be available starting November, along with a new pricing structure that represents an easier step up for creators and enthusiasts from Intel Core S-series mainstream products.

Intel is the only company that delivers a full portfolio of products precision-tuned to handle the sustained compute-intensive workloads used by professional creators and enthusiasts every day. The new Xeon W-2200 and X-series processors take this to the next level as the first high-end desktop PC and mainstream workstation processors to feature AI acceleration with the integration of Intel Deep Learning Boost, which offers up to 2.2 times higher AI inference performance compared with the prior generation. Additionally, this new lineup features Intel Turbo Boost Max Technology 3.0, which has been further enhanced to help software, such as simulation and modeling tools, run as fast as possible by identifying and prioritizing the fastest available cores.
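For context on what Intel Deep Learning Boost (AVX-512 VNNI) actually accelerates, the sketch below shows, in plain Python, the 8-bit multiply-and-accumulate that a VNNI instruction fuses into a single step for each 32-bit accumulator lane; the values are made up, and this is a conceptual illustration rather than vendor code.

```python
# Conceptual illustration of the int8 dot-product-and-accumulate that
# AVX-512 VNNI performs per 32-bit accumulator lane; values are made up.

def vnni_lane_accumulate(acc: int, activations: list[int], weights: list[int]) -> int:
    """Multiply four 8-bit activations by four 8-bit weights and add the
    sum of products to a 32-bit accumulator (one lane of an inference loop)."""
    assert len(activations) == len(weights) == 4
    return acc + sum(a * w for a, w in zip(activations, weights))

acc = 0
acc = vnni_lane_accumulate(acc, [12, 250, 3, 77], [5, -2, 127, -64])
print(acc)  # 12*5 + 250*(-2) + 3*127 + 77*(-64) = -4987
```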

Next-generation Intel Xeon Scalable Processors to Deliver Breakthrough Platform Performance with up to 56 Processor Cores

Intel today announced its future Intel Xeon Scalable processor family (codename Cooper Lake) will offer customers up to 56 processor cores per socket and built-in AI training acceleration in a standard, socketed CPU as part of its mainline Intel Xeon Scalable platforms, with availability in the first half of 2020. The breakthrough platform performance delivered within the high-core-count Cooper Lake processors will leverage the capabilities built into the Intel Xeon Platinum 9200 series, which today is gaining momentum among the world's most demanding HPC customers, including HLRN, Advania, 4Paradigm, and others.

"The Intel Xeon Platinum 9200 series that we introduced as part of our 2nd Generation Intel Xeon Scalable processor family generated a lot of excitement among our customers who are deploying the technology to run their high-performance computing (HPC), advanced analytics, artificial intelligence and high-density infrastructure. Extended 56-core processor offerings into our mainline Intel Xeon Scalable platforms enables us to serve a much broader range of customers who hunger for more processor performance and memory bandwidth."
-Lisa Spelman, vice president and general manager of Data Center Marketing, Intel Corporation

Intel Internal Memo Reveals that even Intel is Impressed by AMD's Progress

Today an article was posted on Intel's internal employee-only portal called "Circuit News". The post, titled "AMD competitive profile: Where we go toe-to-toe, why they are resurgent, which chips of ours beat theirs", goes into detail about the recent history of AMD and how the company achieved its tremendous growth in recent years. Further, Intel talks about where it sees the biggest challenges from AMD's new products, and what the company's "secret sauce" is for fighting back against these improvements.
The full article follows:

Intel Reports First-Quarter 2019 Financial Results

Intel Corporation today reported first-quarter 2019 financial results. "Results for the first quarter were slightly higher than our January expectations. We shipped a strong mix of high performance products and continued spending discipline while ramping 10nm and managing a challenging NAND pricing environment. Looking ahead, we're taking a more cautious view of the year, although we expect market conditions to improve in the second half," said Bob Swan, Intel CEO. "Our team is focused on expanding our market opportunity, accelerating our innovation and improving execution while evolving our culture. We aim to capitalize on key technology inflections that set us up to play a larger role in our customers' success, while improving returns for our owners."

In the first quarter, the company generated approximately $5.0 billion in cash from operations, paid dividends of $1.4 billion and used $2.5 billion to repurchase 49 million shares of stock. In the first quarter, Intel achieved 4 percent growth in the PC-centric business while data-centric revenue declined 5 percent.

Intel Announces Broadest Product Portfolio for Moving, Storing, and Processing Data

Intel Tuesday unveiled a new portfolio of data-centric solutions consisting of 2nd-Generation Intel Xeon Scalable processors, Intel Optane DC memory and storage solutions, and software and platform technologies optimized to help its customers extract more value from their data. Intel's latest data center solutions target a wide range of use cases within cloud computing, network infrastructure and intelligent edge applications, and support high-growth workloads, including AI and 5G.

Building on more than 20 years of world-class data center platforms and deep customer collaboration, Intel's data center solutions target server, network, storage, internet of things (IoT) applications and workstations. The portfolio of products advances Intel's data-centric strategy to pursue a massive $300 billion data-driven market opportunity.

Anthem Gets NVIDIA DLSS and Highlights Support in Latest Update

Saying Anthem has had a rough start would be an understatement, but things can only get better with time (hopefully, anyway). This week saw an update to the PC version that brought with it support for NVIDIA's new DLSS (Deep Learning Super Sampling) technology, to be used with the company's Turing-microarchitecture GeForce RTX cards. NVIDIA's internal testing shows as much as a 40% improvement in average FPS with DLSS on relative to off, as seen in the image below, and there is also a video to help show the graphical changes, or lack thereof in this case. DLSS in Anthem is available on all RTX cards at 3840x2160 resolution gameplay, and on the RTX 2060, 2070, and 2080 at 2560x1440. There is no word on equivalent resolutions at non-16:9 aspect ratios, and presumably 1080p is a no-go, as we first discussed last month.

Note that we will NOT be able to test DLSS in Anthem, owing to the game's five-activation limit on hardware configurations. This prevented us from doing a full graphics card performance test, but our article on the VIP demo is still worth checking out if you are curious. In addition to DLSS, Anthem also has NVIDIA Highlights support for GeForce Experience users to automatically capture and save "best gameplay moments", with a toggle option to enable this setting in the driver. A highlight is generated for an apex kill, boss kill, legendary kill, multi kill, overlook interaction, or a tomb discovery. More on this in the source linked below.

3DMark Adds NVIDIA DLSS Feature Performance Test to Port Royal

Did you see the NVIDIA keynote presentation at CES this year? For us, one of the highlights was the DLSS demo based on our 3DMark Port Royal ray tracing benchmark. Today, we're thrilled to announce that we've added this exciting new graphics technology to 3DMark in the form of a new NVIDIA DLSS feature test. This new test is available now in 3DMark Advanced and Professional Editions.

3DMark feature tests are specialized tests for specific technologies. The NVIDIA DLSS feature test helps you compare performance and image quality with and without DLSS processing. The test is based on the 3DMark Port Royal ray tracing benchmark. Like many games, Port Royal uses Temporal Anti-Aliasing. TAA is a popular, state-of-the-art technique, but it can result in blurring and the loss of fine detail. DLSS (Deep Learning Super Sampling) is an NVIDIA RTX technology that uses deep learning and AI to improve game performance while maintaining visual quality.
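For readers curious how a with/without comparison like this is usually expressed, here is a tiny sketch that turns two frame-rate results into a percentage gain; the sample numbers and helper are purely illustrative and are not the 3DMark scoring formula.

```python
# Express a DLSS-on vs. DLSS-off result as a percentage gain (illustrative only).
# The sample frame rates are made up and are not 3DMark results.

def pct_gain(fps_off: float, fps_on: float) -> float:
    """Return the percentage frame-rate gain of the DLSS-on pass over the DLSS-off pass."""
    return (fps_on / fps_off - 1) * 100

fps_dlss_off = 30.0  # hypothetical average FPS without DLSS
fps_dlss_on = 45.0   # hypothetical average FPS with DLSS
print(f"DLSS gain: {pct_gain(fps_dlss_off, fps_dlss_on):.0f}%")
```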

NVIDIA Presents the TITAN RTX 24GB Graphics Card at $2,499

NVIDIA today introduced NVIDIA TITAN RTX, the world's most powerful desktop GPU, providing massive performance for AI research, data science and creative applications. Driven by the new NVIDIA Turing architecture, TITAN RTX - dubbed T-Rex - delivers 130 teraflops of deep learning performance and 11 GigaRays of ray-tracing performance.

"Turing is NVIDIA's biggest advance in a decade - fusing shaders, ray tracing, and deep learning to reinvent the GPU," said Jensen Huang, founder and CEO of NVIDIA. "The introduction of T-Rex puts Turing within reach of millions of the most demanding PC users - developers, scientists and content creators."