On Sunday, NVIDIA announced Project G-Assist, an AI chatbot for gamers that can be pulled up in the middle of gameplay and asked for help. You could always pause your game and Google for answers, but G-Assist is activated in-game and is situationally aware of it (e.g., it knows where you're stuck and how to help you out). It won't play the game for you, but it will give you actionable advice on how to play or improve. For example, you could ask it how to craft a particular weapon and where to find the required items in-game, and get concise guidance. G-Assist also knows about your graphics card, framerates, and other telemetry data.
We went hands-on with G-Assist, and found that it's very capable of doing the things NVIDIA claims it can, short of playing the game for you. NVIDIA is showing a demo in ARK: Survival Ascended, but the impressive part is that there's no G-Assist integration in ARK itself; rather, it runs as an injected overlay that can capture user input. This means G-Assist can work in ANY game, even without official support. We also learned how G-Assist works under the hood, particularly how the chatbot stays situationally aware of your game, and it's fascinating.
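NVIDIA hasn't shared how its overlay is implemented, but the game-agnostic idea is easy to illustrate: capture the screen and listen for a hotkey from outside the game process, with no per-title integration. Here's a minimal Python sketch under our own assumptions, using the third-party mss and keyboard packages (our choice of tools, not NVIDIA's stack):
```python
# Hypothetical sketch: a game-agnostic "overlay" loop that needs no
# per-game integration, capturing the screen on a global hotkey.
import mss
import mss.tools
import keyboard  # third-party package; registers system-wide hotkeys

def grab_frame():
    """Capture the primary monitor the way recording software would."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])  # monitor 1 = primary display
        mss.tools.to_png(shot.rgb, shot.size, output="gassist_frame.png")
        print("Frame captured; a real assistant would analyze it here.")

keyboard.add_hotkey("alt+g", grab_frame)  # fires regardless of the game in focus
keyboard.wait("esc")                      # keep running until Esc is pressed
```
Because nothing here touches the game's own code, the same loop works across titles, which is the property that lets G-Assist run without official support.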
Besides a speech-recognition model for spoken queries, G-Assist runs multiple computer vision models to understand what's going on in-game. The first is an OCR model that recognizes text on screen, like mission objectives and NPC names. On top of that, another model recognizes in-game objects, like enemy types. Since it's "seeing" your gameplay, it can tally what it sees against its vast pre-trained body of knowledge and come up with answers tailored to your exact situation. NVIDIA says the performance cost of having G-Assist running is very low: unlike DLSS, the application doesn't sit inside the graphics rendering pipeline; it passively samples frames the way screen recording/streaming software would, and runs the compute-intensive model operations only after you give it a query to answer. To achieve that, it keeps a log of recent frames and analyzes them only when necessary.
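To make the "log frames, analyze lazily" idea concrete, here's a rough Python sketch under our own assumptions: a rolling deque stands in for the frame log, Tesseract (via pytesseract) stands in for the OCR model, and the object-recognition and language-model stages are left as hypothetical stubs since NVIDIA hasn't detailed them:
```python
# Illustrative sketch, not NVIDIA's code: keep a rolling log of frames,
# and run the expensive models only when the user asks a question.
from collections import deque
import mss
import pytesseract           # stand-in OCR model (Tesseract)
from PIL import Image

FRAME_LOG = deque(maxlen=120)    # rolling log of the last 120 frames

def log_frame():
    """Cheap step: grab the screen like a recorder, outside the render pipeline."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])
        FRAME_LOG.append(Image.frombytes("RGB", shot.size, shot.rgb))

def answer_query(question: str) -> str:
    """Expensive step, deferred until a query actually arrives."""
    frame = FRAME_LOG[-1]
    on_screen_text = pytesseract.image_to_string(frame)  # OCR: objectives, NPC names...
    # detections = detect_objects(frame)               # hypothetical detector stage
    # return llm_answer(question, on_screen_text, detections)  # hypothetical LLM stage
    return f"Context for {question!r}: {on_screen_text[:200]}"
```
The heavy compute lands inside the deferred `answer_query` call, which is why idle overhead can stay close to that of a screen recorder.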
Besides being an assistive AI, G-Assist can also take action to apply the right graphics and game settings. For that, it's integrated with GeForce Experience Optimal Settings. For example, as NVIDIA demonstrated live in the demo, you could tell it to improve your framerates, either by overclocking or by changing detail settings. You could also ask it to enable DLSS, or to undervolt the GPU. Since it has access to live telemetry from the GPU, you can also request a chart of latency, power usage, or GPU load. NVIDIA made it clear that this is a tech demo designed to show game developers what's possible if they integrated an AI-powered assistant into their games.
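On the telemetry side, NVIDIA's NVML library (exposed in Python as pynvml) already provides the kind of counters such a chart would plot. A small sketch, assuming a single GPU and one sample per second (latency metrics are omitted since NVML doesn't expose them):
```python
# Sketch: sample real GPU telemetry via NVML and chart it, roughly the
# kind of data G-Assist graphs on request. Single-GPU assumption.
import time
import pynvml
import matplotlib.pyplot as plt

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

loads, watts = [], []
for _ in range(30):                                              # ~30 s at 1 Hz
    loads.append(pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu)  # GPU load, %
    watts.append(pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0)   # mW -> W
    time.sleep(1)
pynvml.nvmlShutdown()

fig, (ax_load, ax_pwr) = plt.subplots(2, 1, sharex=True)
ax_load.plot(loads); ax_load.set_ylabel("GPU load (%)")
ax_pwr.plot(watts);  ax_pwr.set_ylabel("Power (W)")
ax_pwr.set_xlabel("Time (s)")
fig.savefig("gpu_telemetry.png")
```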
View at TechPowerUp Main Site