Tuesday, March 25th 2025

NVIDIA Project G-Assist Now Available in NVIDIA App

At Computex 2024, we showcased Project G-Assist - a tech demo that offered a glimpse of how AI assistants could elevate the PC experience for gamers, creators, and more. Today, we're releasing an experimental version of the Project G-Assist System Assistant feature for GeForce RTX desktop users via the NVIDIA app, with GeForce RTX laptop support coming in a future update. As modern PCs become more powerful, they also grow more complex to operate. Users today face over a trillion possible combinations of hardware and software settings when configuring a PC for peak performance - spanning the GPU, CPU, motherboard, monitors, peripherals, and more.

We built Project G-Assist, an AI assistant that runs locally on GeForce RTX AI PCs, to simplify this experience. G-Assist helps users control a broad range of PC settings - from optimizing game and system settings and charting frame rates and other key performance statistics to controlling select peripheral settings such as lighting - all via basic voice or text commands.
Project G-Assist System Assistant
Project G-Assist uses a specially tuned Small Language Model (SLM) to efficiently interpret natural language instructions, and call a variety of NVIDIA and third-party PC APIs to execute actions on the PC.

G-Assist can provide real-time diagnostics and recommendations to alleviate system bottlenecks, improve power efficiency, optimize game settings, overclock your GPU, and much more.


It can chart and export various performance metrics, such as FPS, latency, GPU utilization, and temperatures.


It can answer questions about your PC hardware, or about the NVIDIA software running on your GeForce RTX GPU.


G-Assist can even control select peripherals and software applications with simple commands - enabling users to benchmark or adjust fan speeds, or change lighting on supported Logitech G, Corsair, MSI, and Nanoleaf devices.


Project G-Assist uses a third-party SLM designed to run locally; it is not intended to be a broad conversational AI. To get the best results with Project G-Assist, refer to the list of supported functions, which will be updated as new commands and capabilities are added.

On-Device AI
Unlike massive cloud-hosted AI models that require online access and paid subscriptions, G-Assist runs on your GeForce RTX GPU. This means it is responsive, free to use, and can run offline.

Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today's large-scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.

When G-Assist is prompted for help by pressing Alt+G - say, to optimize graphics settings or check GPU temperatures - your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you're simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app.

Project G-Assist requires the following PC components and operating system:
  • Operating System: Windows 10, Windows 11
  • GPU: GeForce RTX 30, 40, and 50 Series Desktop GPUs with 12 GB VRAM or Higher
  • CPU: Intel Pentium G Series, Core i3, i5, i7, or higher, AMD FX, Ryzen 3, 5, 7, 9, Threadripper or higher
  • Disk Space Required: System Assistant: 6.5 GB, Voice Commands: 3 GB
  • Driver: GeForce 572.83 driver, or later
  • Language: English
Project G-Assist launches with support for desktop GPUs, with laptop support coming in a future update. You can find a full list of G-Assist system requirements, including those for partner peripherals here.

Powering Assistants For ISVs & Community Developers
G-Assist is built with NVIDIA ACE—the same AI tech suite game developers use to breathe life into NPCs. OEMs and ISVs are already leveraging ACE technology to create custom AI Assistants like G-Assist.

For example, MSI unveiled the "AI Robot" engine at CES, designed to power AI Assistants built into MSI Center and MSI Afterburner. Logitech is using ACE to develop the Streamlabs Intelligent AI Assistant, complete with an interactive avatar that can chat with the streamer, comment on gameplay, and more. HP is also working to bring ACE-powered AI assistant capabilities to the Omen Gaming Hub.

AI developers and enthusiasts can also leverage and extend the capabilities of G-Assist.

G-Assist was built for community-driven expansion. To get started, NVIDIA has published a GitHub repository with samples and instructions for creating plugins that add new functionality. Community developers can define functions in a simple JSON format and drop config files into a designated directory, allowing G-Assist to automatically load and interpret them. Developers can even submit plugins to NVIDIA for review and potential inclusion, making these new capabilities available for others.
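To illustrate the load-from-a-directory pattern described above, here is a minimal sketch of how a host application might scan a plugin folder for JSON manifests and index the functions they declare. The directory layout and manifest field names (`functions`, `name`, `description`, `parameters`) are hypothetical stand-ins - the actual schema and paths are defined in NVIDIA's GitHub repository.

```python
import json
from pathlib import Path

def load_plugin_manifests(plugin_dir: Path) -> dict:
    """Scan a plugin directory for manifest.json files and index declared functions.

    Hypothetical manifest schema for illustration only; see NVIDIA's
    GitHub repo for the real G-Assist plugin format.
    """
    functions = {}
    for manifest_path in plugin_dir.glob("*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        for func in manifest.get("functions", []):
            # Each declared function maps a name to a description and parameter
            # list the language model can consult when matching a user request.
            functions[func["name"]] = {
                "plugin": manifest_path.parent.name,
                "description": func.get("description", ""),
                "parameters": func.get("parameters", {}),
            }
    return functions
```

Keeping the manifest declarative like this is what lets new capabilities appear without recompiling the host: dropping a folder with a valid manifest into the directory is enough for the assistant to discover it on the next scan.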

Currently available sample plugins include Spotify, to enable hands-free music and volume control, and Google Gemini, allowing G-Assist to invoke a much larger cloud-based AI for more complex conversations, brainstorming, or web searches using a free Google AI Studio API key. In the clip below, you'll see G-Assist ask Gemini which Legend to pick in Apex Legends when solo queueing, and whether it's wise to jump into Nightmare mode at level 25 in Diablo IV.


For even more customization, NVIDIA published instructions in the GitHub repository to help users generate G-Assist plugins using a ChatGPT-based "Plugin Builder". With this tool, users can have AI generate properly formatted code, then integrate it into G-Assist - enabling quick, AI-assisted functionality that responds to text and voice commands.

Watch how a developer used the Plugin Builder to create a Twitch Plugin for G-Assist. After using ChatGPT to generate the necessary JSON manifest and Python files, the developer simply drops them into the designated directory. From there, G-Assist can instantly check if a streamer is online, returning real-time updates and viewer counts in response to commands like "Hey Twitch, is [streamer] live?"
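A sketch of the kind of handler such a Twitch plugin might contain: the function below turns a response shaped like Twitch's Helix "Get Streams" endpoint into the one-line answer the assistant would speak. The handler name and its wiring into G-Assist are hypothetical; only the response shape (`data`, `game_name`, `viewer_count`) follows the public Helix API.

```python
def format_stream_status(streamer: str, helix_response: dict) -> str:
    """Turn a Helix-style /streams response into a one-line status answer.

    Illustrative only - the real plugin's function names and plumbing
    come from the generated manifest and Python files.
    """
    streams = helix_response.get("data", [])
    if not streams:
        # Helix returns an empty data array when the channel is not live.
        return f"{streamer} is offline right now."
    stream = streams[0]
    return (
        f"{streamer} is live, playing {stream.get('game_name', 'an unknown game')} "
        f"for {stream.get('viewer_count', 0)} viewers."
    )
```

The network call, authentication, and command routing would live in the generated plugin code; keeping the formatting step as a pure function like this makes the plugin easy to test without hitting the Twitch API.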


Details on how to build, share, and load plugins are available in documentation on our GitHub repo.

NVIDIA is opening up the G-Assist framework to the broader AI community, and tools like CrewAI, Flowise, and LangFlow will be able to leverage G-Assist as a custom component in the future, enabling the community to integrate function-calling capabilities in low-code/no-code workflows, AI applications, and agentic flows.

We can't wait to see what the community dreams up! To learn more about plugins and community-built AI applications, check out NVIDIA's RTX AI Garage blog series.

Project G-Assist Available Now
Download Project G-Assist through NVIDIA app's Home tab, in the Discovery section. G-Assist currently supports GeForce RTX desktop GPUs, English language, and the voice and text commands listed here. We'll continue to expand G-Assist's capabilities in future updates. Press Alt+G after installation to activate G-Assist.

Remember: your feedback fuels the future! G-Assist is an experimental showcase of what small, local AI models sourced from the cutting edge of AI research can do. If you'd like to help shape the future of G-Assist, you can submit feedback by clicking the "Send Feedback" exclamation icon at the top right of the NVIDIA app window and selecting "Project G-Assist". Your insights will help us determine what improvements and features to pursue next.
Source: NVIDIA

7 Comments on NVIDIA Project G-Assist Now Available in NVIDIA App

#1
L'Eliminateur
very funny, 12GB VRAM req, whilst they released the 3080 with 10GB....
#2
Dragokar
Could someone inside TPU please test how much performance, if so, it does cost when enabled?
#3
Quicks
What useless crap is this, don't they have anything better to do, like optimise drivers?

Better yet, I want a proper OSD with CPU temperature, fan speed, and much more. How about a nice new control panel integrated with the app, all in one?
#4
duckface
a good LLM must have at least 4 GB of VRAM consumption; this will consume at least 6 GB of VRAM to run well. It is not worth it while companies do not launch video cards with 24 GB in entry-level models to be used with AI, something I think they will not do
#5
Ed_1
L'Eliminateur: very funny, 12GB VRAM req, whilst they released the 3080 with 10GB....
Yeah, with 12g min only the highest 30 series are supported
#7
Steevo
It recommends you purchase a new 5090.


See I did what it's programmed to do, now I need you all to mail me 12GB of your Vmem.