
NVIDIA Project G-Assist Now Available in NVIDIA App

GFreeman

News Editor
Staff member
Joined
Mar 6, 2023
Messages
1,806 (2.40/day)
At Computex 2024, we showcased Project G-Assist - a tech demo that offered a glimpse of how AI assistants could elevate the PC experience for gamers, creators, and more. Today, we're releasing an experimental version of the Project G-Assist System Assistant feature for GeForce RTX desktop users, via NVIDIA app, with GeForce RTX laptop support coming in a future update. As modern PCs become more powerful, they also grow more complex to operate. Users today face over a trillion possible combinations of hardware and software settings when configuring a PC for peak performance - spanning the GPU, CPU, motherboard, monitors, peripherals, and more.

We built Project G-Assist, an AI assistant that runs locally on GeForce RTX AI PCs, to simplify this experience. G-Assist helps users control a broad range of PC settings - from optimizing game and system settings and charting frame rates and other key performance statistics, to controlling select peripheral settings such as lighting - all via basic voice or text commands.



Project G-Assist System Assistant
Project G-Assist uses a specially tuned Small Language Model (SLM) to efficiently interpret natural language instructions, and call a variety of NVIDIA and third-party PC APIs to execute actions on the PC.
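To make that pattern concrete, here is a minimal sketch of SLM-driven function calling in Python, assuming the model emits a small JSON object naming a function and its arguments; the function names and dispatch table are hypothetical illustrations, not NVIDIA's actual API.

    # Minimal sketch of SLM-driven function calling. It assumes the model returns
    # a JSON object naming a function and its arguments; the function names and
    # dispatch table are hypothetical, not NVIDIA's actual API.
    import json

    def get_gpu_temperature() -> str:
        return "GPU temperature: 62 C"          # placeholder reading

    def set_fan_speed(percent: int) -> str:
        return f"Fan speed set to {percent}%"   # placeholder action

    DISPATCH = {
        "get_gpu_temperature": get_gpu_temperature,
        "set_fan_speed": set_fan_speed,
    }

    def handle_model_output(raw: str) -> str:
        """Parse the SLM's structured reply and call the matching local function."""
        call = json.loads(raw)
        return DISPATCH[call["name"]](**call.get("args", {}))

    print(handle_model_output('{"name": "set_fan_speed", "args": {"percent": 70}}'))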

G-Assist can provide real-time diagnostics and recommendations to alleviate system bottlenecks, improve power efficiency, optimize game settings, overclock your GPU, and much more.


It can chart and export various performance metrics, such as FPS, latency, GPU utilization, and temperatures.
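For readers curious how such metrics can be read locally, here is a hedged sketch that polls GPU utilization and temperature through the pynvml bindings to NVIDIA's NVML library; this illustrates the general approach only and is not how G-Assist itself gathers telemetry.

    # Sketch: poll GPU utilization and temperature via NVML (pip install nvidia-ml-py).
    # Illustrative only; G-Assist's own telemetry path is not documented here.
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first GPU in the system

    for _ in range(5):                                 # five one-second samples
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"GPU util: {util.gpu}%  temp: {temp} C")
        time.sleep(1)

    pynvml.nvmlShutdown()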


It can answer questions about your PC hardware, or about NVIDIA software onboard your GeForce RTX GPU.


G-Assist can even control select peripherals and software applications with simple commands - enabling users to benchmark or adjust fan speeds, or change lighting on supported Logitech G, Corsair, MSI, and Nanoleaf devices.


Project G-Assist uses a third-party SLM designed to run locally; it is not intended to be a broad conversational AI. To get the best results with Project G-Assist, refer to the list of supported functions, which will be updated as new commands and capabilities are added.

On-Device AI
Unlike massive cloud-hosted AI models that require online access and paid subscriptions, G-Assist runs on your GeForce RTX GPU. This means it is responsive, free to use, and can run offline.

Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today's large-scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.
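To give a rough sense of what running an 8-billion-parameter instruct model locally looks like, here is a generic sketch using llama-cpp-python; the model file name is a placeholder and this is not G-Assist's actual runtime.

    # Sketch: run a quantized 8B Llama-based instruct model on the GPU with
    # llama-cpp-python (pip install llama-cpp-python). The model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
        n_gpu_layers=-1,                               # offload all layers to the RTX GPU
        n_ctx=4096,                                    # context window
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "How do I cap my frame rate at 120 FPS?"}],
        max_tokens=128,
    )
    print(out["choices"][0]["message"]["content"])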

When G-Assist is prompted for help by pressing Alt+G - say, to optimize graphics settings or check GPU temperatures - your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you're simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app.

Project G-Assist requires the following PC components and operating system:
  • Operating System: Windows 10, Windows 11
  • GPU: GeForce RTX 30, 40, and 50 Series Desktop GPUs with 12 GB VRAM or Higher
  • CPU: Intel Pentium G Series, Core i3, i5, i7, or higher; AMD FX, Ryzen 3, 5, 7, 9, Threadripper, or higher
  • Disk Space Required: 6.5 GB (System Assistant) plus 3 GB (Voice Commands)
  • Driver: GeForce 572.83 driver, or later
  • Language: English
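As a quick way to sanity-check the GPU-side requirements on your own machine, here is a hedged sketch using NVML; the 12 GB and 572.83 thresholds come from the list above, while the rest is illustrative.

    # Sketch: check local VRAM and driver version against the requirements above
    # (pip install nvidia-ml-py). Illustrative, not an official compatibility tool.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    vram_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / (1024 ** 3)
    driver = pynvml.nvmlSystemGetDriverVersion()
    if isinstance(driver, bytes):                      # older bindings return bytes
        driver = driver.decode()

    print(f"VRAM: {vram_gb:.1f} GB (G-Assist needs 12 GB or more)")
    print(f"Driver: {driver} (G-Assist needs 572.83 or later)")

    pynvml.nvmlShutdown()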

Project G-Assist launches with support for desktop GPUs, with laptop support coming in a future update. You can find a full list of G-Assist system requirements, including those for partner peripherals here.

Powering Assistants For ISVs & Community Developers
G-Assist is built with NVIDIA ACE—the same AI tech suite game developers use to breathe life into NPCs. OEMs and ISVs are already leveraging ACE technology to create custom AI Assistants like G-Assist.

For example, MSI unveiled the "AI Robot" engine at CES, designed to power AI Assistants built into MSI Center and MSI Afterburner. Logitech is using ACE to develop the Streamlabs Intelligent AI Assistant, complete with an interactive avatar that can chat with the streamer, comment on gameplay, and more. HP is also working on leveraging ACE for AI assistant capabilities in Omen Gaming Hub.

AI developers and enthusiasts can also leverage and extend the capabilities of G-Assist.

G-Assist was built for community-driven expansion. To get started, NVIDIA has published a GitHub repository with samples and instructions for creating plugins that add new functionality. Community developers can define functions in simple JSON formats and drop config files into a designated directory, allowing G-Assist to automatically load and interpret them. Developers can even submit plugins to NVIDIA for review and potential inclusion, making these new capabilities available for others.
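To give a feel for the shape of such a plugin, here is a hypothetical sketch in Python that writes a JSON manifest describing one function and defines the handler it maps to; the field names, directory, and function are illustrative guesses, not the schema documented in NVIDIA's repository.

    # Hypothetical sketch of a G-Assist-style plugin: a JSON manifest describing one
    # function, plus the Python handler it maps to. Field names and the target
    # directory are illustrative guesses, not NVIDIA's documented schema.
    import json
    from pathlib import Path

    manifest = {
        "name": "weather",
        "description": "Fetch the current weather for a city",
        "functions": [
            {
                "name": "get_weather",
                "description": "Return a short weather summary",
                "parameters": {"city": {"type": "string"}},
            }
        ],
    }

    plugin_dir = Path("plugins/weather")               # stand-in for the designated directory
    plugin_dir.mkdir(parents=True, exist_ok=True)
    (plugin_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))

    def get_weather(city: str) -> str:
        """Hypothetical handler the assistant would invoke for get_weather."""
        return f"The weather in {city} is sunny."      # placeholder response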

Currently available sample plugins include Spotify, to enable hands-free music and volume control, and Google Gemini, allowing G-Assist to invoke a much larger cloud-based AI for more complex conversations, brainstorming, or web searches using a free Google AI Studio API key. In the clip below, you'll see G-Assist ask Gemini about which Legend to pick in Apex Legends when solo queueing, and whether it's wise to jump into Nightmare mode at level 25 in Diablo IV.
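For a sense of how such a hand-off could work, here is a minimal sketch that forwards an open-ended question to Gemini using the google-generativeai SDK and a free AI Studio key; this is not the actual plugin's code.

    # Sketch: forward an open-ended question to Gemini with a free Google AI Studio
    # API key (pip install google-generativeai). Illustrative only, not the plugin's code.
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])   # key from AI Studio
    model = genai.GenerativeModel("gemini-1.5-flash")

    def ask_gemini(question: str) -> str:
        """Hand a question that is too open-ended for a local SLM to the cloud model."""
        return model.generate_content(question).text

    print(ask_gemini("Which Apex Legends character suits solo queue?"))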


For even more customization, NVIDIA published instructions in the GitHub repository to help users generate G-Assist plugins using a ChatGPT-based "Plugin Builder". With this tool, users can have AI generate properly formatted code, then integrate it into G-Assist - enabling quick, AI-assisted functionality that responds to text and voice commands.

Watch how a developer used the Plugin Builder to create a Twitch Plugin for G-Assist. After using ChatGPT to generate the necessary JSON manifest and Python files, the developer simply drops them into the designated directory. From there, G-Assist can instantly check if a streamer is online, returning real-time updates and viewer counts in response to commands like "Hey Twitch, is [streamer] live?"
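For context on what such a check involves, here is a hedged sketch that queries Twitch's public Helix API; the credentials are placeholders and this is not the plugin shown in the video.

    # Sketch: ask Twitch's Helix API whether a streamer is live (pip install requests).
    # The credentials are placeholders; this is not NVIDIA's Twitch plugin code.
    import requests

    CLIENT_ID = "your_twitch_client_id"       # placeholder
    OAUTH_TOKEN = "your_app_access_token"     # placeholder

    def is_live(streamer: str) -> str:
        resp = requests.get(
            "https://api.twitch.tv/helix/streams",
            params={"user_login": streamer},
            headers={"Client-ID": CLIENT_ID, "Authorization": f"Bearer {OAUTH_TOKEN}"},
            timeout=10,
        )
        data = resp.json().get("data", [])
        if data:
            return f"{streamer} is live with {data[0]['viewer_count']} viewers."
        return f"{streamer} is offline."

    print(is_live("some_streamer"))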


Details on how to build, share, and load plugins are available in documentation on our GitHub repo.

NVIDIA is opening up the G-Assist framework to the broader AI community, and tools like CrewAI, Flowise, and LangFlow will be able to leverage G-Assist as a custom component in the future, enabling the community to integrate function-calling capabilities in low-code/no-code workflows, AI applications, and agentic flows.

We can't wait to see what the community dreams up! To learn more about plugins and community-built AI applications, check out NVIDIA's RTX AI Garage blog series.

Project G-Assist Available Now
Download Project G-Assist through NVIDIA app's Home tab, in the Discovery section. G-Assist currently supports GeForce RTX desktop GPUs, the English language, and the voice and text commands listed here. In future updates, we'll continue to add and refine G-Assist capabilities. Press Alt+G after installation to activate G-Assist.

Remember: your feedback fuels the future! G-Assist is an experimental showcase of what small, local AI models sourced from the cutting edge of AI research can do. If you'd like to help shape the future of G-Assist, you can submit feedback by clicking the "Send Feedback" exclamation icon at the top right of the NVIDIA app window and selecting "Project G-Assist". Your insights will help us determine what improvements and features to pursue next.

View at TechPowerUp Main Site | Source
 
Joined
May 22, 2010
Messages
426 (0.08/day)
Processor R7-7700X
Motherboard Gigabyte X670 Aorus Elite AX
Cooling Scythe Fuma 2 rev B
Memory no name DDR5-5200
Video Card(s) Some 3080 10GB
Storage dual Intel DC P4610 1.6TB
Display(s) Gigabyte G34MQ + Dell 2708WFP
Case Lian-Li Lancool III black no rgb
Power Supply CM UCP 750W
Software Win 10 Pro x64
very funny, 12GB VRAM req, whilst they released the 3080 with 10GB....
 
Joined
Mar 1, 2021
Messages
595 (0.40/day)
Location
Germany
System Name Homebase
Processor Ryzen 5 5600
Motherboard Gigabyte Aorus X570S UD
Cooling Scythe Mugen 5 RGB
Memory 2*16 Kingston Fury DDR4-3600 double ranked
Video Card(s) AMD Radeon RX 6800 16 GB
Storage 1*512 WD Red SN700, 1*2TB Curcial P5, 1*2TB Sandisk Plus (TLC), 1*14TB Toshiba MG
Display(s) Philips E-line 275E1S
Case Fractal Design Torrent Compact
Power Supply Corsair RM850 2019
Mouse Sharkoon Sharkforce Pro
Keyboard Fujitsu KB955
Could someone at TPU please test how much performance, if any, this costs when enabled?
 
Joined
Mar 27, 2018
Messages
225 (0.09/day)
Processor AMD Ryzen 5 3600
Motherboard Asus ROG Strix X470-F
Cooling Reeven RC-1205
Memory G.Skill F4-3200C16D-16GTZKW TridentZ 16GB (2x8GB)
Video Card(s) Powercolor x470 red devil
Storage Mushkin MKNSSDPL500GB-D8 Pilot 500GB
Display(s) Samsung 23"
Case Phanteks PH-EC300PTG
Audio Device(s) SupremeFX S1220A
Power Supply Super Flower SF-650F14MT(BK) Leadex 650W 80 Plus Silver
Mouse Cooler master m530
Keyboard Cheapo
What useless crap is this? Don't they have anything better to do, like optimise drivers?

Better yet, I want a proper OSD with CPU temperature, fan speed, and much more. How about a nice new control panel integrated with the app, all in one?
 

duckface

New Member
Joined
May 16, 2024
Messages
25 (0.08/day)
A good LLM needs at least 4 GB of VRAM, and this will consume at least 6 GB of VRAM to run well. It's not worth it as long as companies don't launch entry-level video cards with 24 GB for AI use - something I think they will not do.
 
Joined
Dec 14, 2006
Messages
548 (0.08/day)
System Name Ed-PC
Processor Intel i5-12600k
Motherboard Asus TUF Z690 PLUS Wifi D4
Cooling Noctua NH-14S
Memory Crucial Ballistix DDR4 C16@3600 16GB
Video Card(s) Nvidia MSI 970
Storage Samsung 980, 860evo
Case Lian Li Lancool II mesh Perf
Audio Device(s) onboard
Power Supply Corsair RM750x
Software Win10 Pro 64bit
Joined
Feb 23, 2019
Messages
6,367 (2.86/day)
Location
Poland
Processor Ryzen 7 5800X3D
Motherboard Gigabyte X570 Aorus Elite
Cooling Thermalright Phantom Spirit 120 SE
Memory 2x16 GB Crucial Ballistix 3600 CL16 Rev E @ 3600 CL14
Video Card(s) RTX3080 Ti FE
Storage SX8200 Pro 1 TB, Plextor M6Pro 256 GB, WD Blue 2TB
Display(s) LG 34GN850P-B
Case SilverStone Primera PM01 RGB
Audio Device(s) SoundBlaster G6 | Fidelio X2 | Sennheiser 6XX
Power Supply SeaSonic Focus Plus Gold 750W
Mouse Endgame Gear XM1R
Keyboard Wooting Two HE
Big red flag:
[screenshot attachment]
 
Joined
Nov 4, 2005
Messages
12,156 (1.72/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs, 24TB Enterprise drives
Display(s) 55" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
It recommends you purchase a new 5090.


See I did what it's programmed to do, now I need you all to mail me 12GB of your Vmem.
 
Joined
Dec 19, 2008
Messages
317 (0.05/day)
Location
WA, USA
System Name Desktop
Processor AMD Ryzen 5950X
Motherboard ASUS Strix B450-I
Cooling be quiet! Dark Rock TF 2
Memory 32GB DDR4 3600
Video Card(s) AMD RX 6800
Storage 480GB MyDigitalSSD NVME
Display(s) AOC CU34G2X
Power Supply 850w
Mouse Razer Basilisk V3
Keyboard Steelseries Apex 5

duckface

New Member
Joined
May 16, 2024
Messages
25 (0.08/day)
Could someone at TPU please test how much performance, if any, this costs when enabled?
For an LLM to see an image, understand it, and respond, you need at least a 3060 Ti's worth of compute. So you'd need at least a 4090 just to game with roughly a 3060's capacity while keeping a 3060 Ti's worth of headroom free for the LLM to generate responses without everything crashing.
 