Wednesday, May 1st 2024

We Tested NVIDIA's new ChatRTX: Your Own GPU-accelerated AI Assistant with Photo Recognition, Speech Input, Updated Models

NVIDIA today unveiled ChatRTX, an AI assistant that runs locally on your machine, accelerated by your GeForce RTX GPU. NVIDIA originally launched it as "Chat with RTX" back in February 2024, when it was regarded more as a public tech demo; we reviewed the application in our feature article. The ChatRTX rebranding is probably aimed at making the name sound more like ChatGPT, which is what the application aims to be, except that it runs completely on your machine and is extensively customizable. The most obvious advantage of a locally-run AI assistant is privacy: you are interacting with an assistant that processes your prompts locally, accelerated by your GPU. The second is that you aren't held back by the performance bottlenecks of cloud-based assistants.

ChatRTX is a major update over the Chat with RTX tech demo from February. To begin with, the application has several stability refinements over Chat with RTX, which felt a little rough around the edges. NVIDIA has significantly updated the LLMs included with the application, including Mistral 7B INT4 and Llama 2 7B INT4. Support has also been added for additional LLMs, including Gemma, a local LLM trained by Google and based on the same technology used to create Google's flagship Gemini model. ChatRTX now also supports ChatGLM3, for both English and Chinese prompts. Perhaps the biggest upgrade to ChatRTX is its ability to recognize images on your machine, as it incorporates CLIP (contrastive language-image pre-training) from OpenAI. CLIP is a neural network that matches images against natural-language descriptions of their contents. Using this feature, you can search and interact with your image library without the need for metadata. ChatRTX doesn't just take text input—you can speak to it. It now accepts natural voice input, as it integrates Whisper, an automatic speech recognition model that transcribes speech to text.
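CLIP-style image search works by embedding both images and text queries into a shared vector space, then ranking images by cosine similarity to the query. A minimal sketch of that retrieval step, with toy vectors standing in for real CLIP embeddings (this illustrates the technique only; it is not ChatRTX's actual code):

```python
import numpy as np

def rank_images(text_emb, image_embs, names):
    """Rank images by cosine similarity to a text query embedding."""
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = imgs @ t                     # cosine similarity per image
    order = np.argsort(-scores)           # best match first
    return [(names[i], float(scores[i])) for i in order]

# Toy embeddings standing in for real CLIP outputs
query = np.array([1.0, 0.0, 0.2])
library = np.array([[0.9, 0.1, 0.3],     # "dog.jpg" - similar to the query
                    [0.0, 1.0, 0.0]])    # "cat.jpg" - dissimilar
ranked = rank_images(query, library, ["dog.jpg", "cat.jpg"])
print(ranked[0][0])  # dog.jpg ranks first
```

Because matching happens in embedding space, no filenames or metadata tags are needed: a query like "beach sunset" finds matching photos purely from image content.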
DOWNLOAD: NVIDIA ChatRTX

As with the original Chat with RTX tech demo, the new ChatRTX application's biggest feature is its ability to let users switch between AI models, or to build a custom dataset from text and images on your local machine. You can point it to a folder of documents (plain text, Word .doc, and PDF files, as well as images), and it will index them so it can answer queries related to that dataset.
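Under the hood, this kind of local-dataset feature is typically retrieval-augmented generation: documents are split into chunks, each chunk is embedded, and the chunks most similar to the query are fed to the LLM as context. A toy sketch of the retrieval half, using bag-of-words vectors in place of a real embedding model (NVIDIA's actual pipeline uses TensorRT-LLM and is not public in this form):

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

docs = ["GPU drivers accelerate inference on RTX cards",
        "The recipe calls for two cups of flour"]
print(retrieve("which GPU accelerates inference", docs))
```

The key design point is that the LLM itself is never retrained on your files; only a searchable index is built, which is why adding a new folder takes minutes rather than hours.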


Some major limitations remain with ChatRTX that we had hoped would be fixed since its February release, the biggest being context: the ability to ask follow-up questions. Follow-ups are apparently harder to implement than they seem, as the model has to connect the new question to the previous ones and to its responses to them. The application is also inaccurate in attributing its responses to the right text files. The browser-based frontend only supports Chrome and Edge; it's buggy with Firefox.
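Part of why follow-ups are hard in a retrieval-based assistant: the new question ("How much VRAM does it need?") is meaningless to the retriever on its own, so the history must first be folded into a standalone query. A naive sketch of that step (hypothetical helper name; production systems use an LLM to rewrite the query rather than simple concatenation):

```python
def make_standalone_query(history, question):
    """Naively fold prior turns into the new question so retrieval
    sees the full context; real systems use an LLM to rewrite it."""
    context = " ".join(f"{role}: {text}" for role, text in history)
    return f"{context} user: {question}" if history else question

history = [("user", "What GPU do I need for ChatRTX?"),
           ("assistant", "A GeForce RTX 30- or 40-series card.")]
q = make_standalone_query(history, "How much VRAM does it need?")
print("GPU" in q and "VRAM" in q)  # the rewritten query keeps both topics
```

Even this toy version shows the failure mode: concatenated history grows without bound and drags in irrelevant turns, which is why getting follow-ups right takes more engineering than it first appears.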

25 Comments on We Tested NVIDIA's new ChatRTX: Your Own GPU-accelerated AI Assistant with Photo Recognition, Speech Input, Updated Models

#1
dgianstefani
TPU Proofreader
V cool, gonna try it out later today
#2
Denver
The level of trickery only increases.

All that's left is for Nvidia to create an assistant with the appearance of an Anime character to further captivate lonely nerds and keep them on a leash. If anyone wants, feel free to write down the idea for a film.
#3
Yukikaze
DenverThe level of trickery only increases.

All that's left is for Nvidia to create an assistant with the appearance of an Anime character to further captivate lonely nerds and keep them on a leash. If anyone wants, feel free to write down the idea for a film.
Her (film) - Wikipedia
#4
bonehead123
And so it begins....

Have rope, will strangle....

First they captured the AI cloud-base, now they are wiggling their way into your pc's insides, which will spawn the command & control of all pc's everywhere, all the time, all at once...

And then, all that will remain is "Hello SkyNet, how can I help you infiltrate & destroy humanity today?"

No thanks, cause resistance is NOT futile !
#5
TechKilledMe
Nice, I will have to check this out. Wonder how much VRAM matters for this. 16GB seems to be the bare minimum for any sort of AI work from my limited knowledge.
#6
john_
DenverThe level of trickery only increases.

All that's left is for Nvidia to create an assistant with the appearance of an Anime character to further captivate lonely nerds and keep them on a leash. If anyone wants, feel free to write down the idea for a film.
In my opinion the reason why Cortana failed in Windows. Imagine having the HALO hologram on screen when wanting some assistance instead of some text or plain audio.
#7
Shihab
I find it amusing that a company whose entire business is built on top of graphics, can't be bothered to develop a decent, native GUI instead of this webserver-based crap.
Oh well... At least now people can get their wrong and nonsensical answers without risking [much of] their privacy.
john_Imagine having the HALO hologram on screen when wanting some assistance instead of some text or plain audio.
We had better.

bonehead123now they are wiggling their way into your pc's insides,
I have some really bad news for you, mate...
#8
Assimilator
This makes me crave the sweet release of death.
#9
evernessince
Meh, just get GPT4All. Better and not locked down to Nvidia only.
#10
Eternit
It would be great to be able to buy next year a gaming PC which is not an AI PC.
#11
Assimilator
EternitIt would be great to be able to buy next year a gaming PC which is not an AI PC.
Good luck with that. They're stuffing "AI" into anything they can, I'm honestly amazed there aren't AI dildos yet.
#12
dgianstefani
TPU Proofreader
AssimilatorGood luck with that. They're stuffing "AI" into anything they can, I'm honestly amazed there aren't AI dildos yet.
They exist according to the Kagi search I just did.

Glad I didn't use Google or I might start getting targeted adverts...
#13
dtoxic
AI this AI that...and here i am just wanting a reasonably good and affordable GPU without a need to sell a kidney.
#14
the-last-englishman
Just tried running it on my local photo library, it errors out after 30 minutes of scanning the library.
#15
bug
Windows-only. Shove it, Nvidia.
#17
bonehead123
AssimilatorI'm honestly amazed there aren't AI dildos yet
"I am fully functional, and programmed in multiple techniques" - Lt. Cmdr Data :D

'nuff said !
#18
Denver
YukikazeHer (film) - Wikipedia
Nah, I was pondering something more profound and ominous, envisioning the CEO in leather jacket entering a sinister pact with pharmaceutical companies. Their aim? To employ any means necessary to tether people to their screens, fostering obesity to peddle weight loss pills as miraculous solution. You can take it, Netflix, :P
#19
Vayra86
PhilaphlousHoly cow... 11.6GB...????
Fits in an x70's VRAM!

Until the next update
DenverNah, I was pondering something more profound and ominous, envisioning the CEO in leather jacket entering a sinister pact with pharmaceutical companies. Their aim? To employ any means necessary to tether people to their screens, fostering obesity to peddle weight loss pills as miraculous solution. You can take it, Netflix, :p
All that's left for you to do is find a link that says Huang tried Ozempic and you're up for internet hero status. The ingredients are all there, live and in effect already... :D Who needs Netflix?
#20
Gmr_Chick
AssimilatorGood luck with that. They're stuffing "AI" into anything they can, I'm honestly amazed there aren't AI dildos yet.
dgianstefaniThey exist according to the Kagi search I just did.

Glad I didn't use Google or I might start getting targeted adverts...
"AI" in dildos now? Is NOTHING sacred? :cry: It's a good thing I'm more of a "hands on" kind of gal... Go to hell, Skynet Dildo!
#21
Denver
Vayra86Fits in an x70's VRAM!

Until the next update


All that's left for you to do is find a link that says Huang tried Ozempic and you're up for internet hero status. The ingredients are all there, live and in effect already... :D Who needs Netflix?
I'll have my GPT henchman do it;
Placing creation against the creator is a cliché but it works. :toast:
#22
stimpy88
Vayra86Fits in an x70's VRAM!

Until the next update
12.9GB after decompression, so no, you need an upgrade! nGreedia will kindly help you out though. ;)
#23
mb194dc
But won't someone think about Clippy ?

Can copilot be run locally or is that coming?
#24
bug
mb194dcBut won't someone think about Clippy ?

Can copilot be run locally or is that coming?
I think they can all run locally, but they eat too much RAM.
Remember the debacle around Gemini on Pixels, when Google wanted to leave out the 8 because of "hardware limitations", despite it being exactly the same as the Pro, only with 8GB RAM instead of 12?

I believe what is going on here is companies figuring out ways to shrink their models while still offering useful functionality or otherwise compressing the models better, before declaring them ready to run on local machines.