
Chat with NVIDIA RTX Tech Demo Review


Conclusion

All in all, we're plenty impressed with the promise and potential of Chat with RTX. It should be super useful not just for interacting with the native datasets of Llama 2 and Mistral, but with pretty much any information. Imagine you scraped all of Wikipedia and got it to give you straight answers on anything you want. The benefits for school and college students in particular are endless. What really makes Chat with RTX stand out is that it works completely offline (after the initial download), which not only ensures it will work when your internet is down, but also means that none of your data leaves your PC.

For me, this is probably the biggest selling point. Data is valuable, everyone knows that. Human-generated data to feed back into your AI is even more valuable, which is part of the reason why I believe the vast majority of AI experiences will be cloud-connected. The vendors' precious AI algorithms never reach your PC, an arrangement that not only facilitates sophisticated data analysis on their end, but also comes with significant privacy implications. Users must entrust their sensitive data to external servers for processing, raising concerns about potential breaches or unauthorized access. Moreover, the lack of direct control over these cloud-hosted algorithms creates further privacy risks, as users have limited visibility into how their data is being used and protected. This reliance on cloud computing underscores the delicate balance between harnessing advanced AI capabilities and safeguarding individual privacy rights.

This cloud-centric deployment of AI also affects hardware vendors who want to sell you their AI-optimized chips. Recent processor launches such as Intel Meteor Lake and AMD Ryzen 8000G have introduced hardware support for AI. While these processors may not offer the same level of AI compute power as NVIDIA RTX graphics cards, they demonstrate that chip designers are investing significant silicon area in response to the AI trend. For all these vendors, it will be a challenge to convince software developers to run their models on local hardware, which is why I think NVIDIA is investing in Chat with RTX, which, incidentally, is open source.

Considering an install size of almost 100 GB, I have serious doubts that today's release will see wide adoption, especially outside the tech-expert scene. Even with NVIDIA's simplified UI and installer workflow, the current release is still too complicated for the average Joe to use. ChatGPT does a fantastic job of hiding all the details, but if you've ever tried to set up a local installation of Stable Diffusion or TensorFlow, you'll have noticed that it takes a computer expert with programming skills and knowledge of AI concepts to get it right. The ultimate goal is to simplify these systems, moving away from complex setups with dozens of tweakable parameters for optimal response time and results, towards straightforward setups that anyone can use.

There is an abundance of data science packages available online, many of them free and open source. From a technical standpoint, these packages can indeed replicate much of what Chat with RTX offers, at least for those proficient in coding. So it's not like NVIDIA has invented something new and revolutionary, but the fact that they are releasing this software confirms that they are aware of the challenges facing AI on the desktop and want to address them.

Answers from generative AI can seem really convincing, even when they're completely wrong. This text-generation style that mimics human authors can make the answers appear trustworthy at first glance, even when they lack factual accuracy. The natural flow and language proficiency exhibited by these algorithms can further reinforce this impression, causing users to overlook the possibility of errors or misinformation. Consequently, there's a risk that individuals may unknowingly accept and rely on AI responses without critically evaluating their validity, inadvertently perpetuating misconceptions or false information.

This is a problem for Chat with RTX, too. We fed it all our news posts, thinking this would make it an expert on computer hardware. While the answers are generally right, this approach didn't work as well as expected: some answers turned out to be wrong, even though they initially appeared correct and were written in an authoritative style.
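The experiment above is an instance of retrieval-augmented generation (RAG): the model doesn't memorize your documents, it retrieves the chunks most similar to your question and answers from those, so a poor retrieval match yields a confident but wrong answer. A minimal, purely illustrative sketch of that retrieval step (all names and the toy word-count scoring below are my own, not NVIDIA's code, which uses neural embeddings via TensorRT-LLM):

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": word-count vector (real systems use neural embeddings)
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two word-count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=1):
    # Rank document chunks by similarity to the question;
    # the top k are handed to the language model as context
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical stand-ins for the news posts we fed the tool
news_chunks = [
    "The GeForce RTX 4090 is built on the AD102 GPU with 16384 CUDA cores.",
    "AMD Ryzen 8000G APUs include an NPU for local AI workloads.",
    "Meteor Lake is Intel's first chiplet-based mobile processor.",
]
context = retrieve("How many CUDA cores does the RTX 4090 have?", news_chunks)
print(context[0])  # the RTX 4090 chunk scores highest
```

When the top-ranked chunk doesn't actually contain the answer, the model still writes a fluent response around whatever it was given, which is exactly the failure mode we saw with our news archive.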

Besides answering information-based queries, you can also get these AI models to perform text manipulation, such as summarizing or rephrasing text, the kind of tasks you'd normally use ChatGPT for, leaving you at the mercy of its service availability, not to mention handing over your data. Instead, imagine a PC with a GeForce RTX GPU that can devour volumes of information and make sense of it, all locally.

I really like what NVIDIA has done for YouTube videos, because it significantly simplifies how we can consume this content. While it's not the magical solution where AI watches the video for you (it simply downloads YouTube's text-based subtitles), it's still a step in the right direction towards making content more accessible for everyone.

No doubt, Chat with RTX is still rough around the edges, but it really is NVIDIA's first attempt to enter the AI PC software market, and we're sure it will improve a lot over time.

Chat with RTX is available for download now at NVIDIA's website.
Aug 17th, 2024 04:09 EDT
