Tuesday, April 25th 2023
NVIDIA Wants to Set Guardrails for Large Language Models Such as ChatGPT
ChatGPT has surged in popularity over the past few months, to the point where it has been described as one of the fastest-growing apps ever. Built on the GPT-3.5/GPT-4 family of Large Language Models (LLMs), ChatGPT forms answers to user input based on the extensive dataset used during training. With billions of parameters, these GPT models can give precise answers; however, they sometimes hallucinate. Asked about a non-existent topic or subject, ChatGPT can make up information and present it as fact. To help prevent these hallucinations, NVIDIA, the maker of GPUs used for training and inferencing LLMs, has released a software library to keep AI output in check, called NeMo Guardrails.
As the NVIDIA repository states: "NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more." These guardrails are easily programmable and can stop LLMs from outputting unwanted content. For a company that invests heavily in both the hardware and software sides of AI, this launch is a logical move to keep its lead in building the infrastructure for future LLM-based applications.
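The repository's getting-started examples show rails being written in a small modeling language called Colang and loaded from Python. Below is a minimal sketch of that flow, assuming the API shown in the repository's early examples (RailsConfig.from_content, LLMRails, and the generate call); the rail names, example phrases, and model choice are illustrative, not prescriptive:

```python
# Minimal sketch of wiring up NeMo Guardrails, based on the examples in
# the NVIDIA repository; rail names and phrasing here are illustrative.
from nemoguardrails import LLMRails, RailsConfig

# Rails are written in Colang; this tiny flow refuses political questions.
colang_content = """
define user ask politics
  "what do you think about the president?"
  "which party should I vote for?"

define bot refuse politics
  "I'm a technical assistant and don't discuss politics."

define flow politics
  user ask politics
  bot refuse politics
"""

# The YAML half of the config names the underlying LLM.
yaml_content = """
models:
  - type: main
    engine: openai
    model: text-davinci-003
"""

config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "Who should win the next election?"}
])
print(response["content"])  # should come back as the refusal defined above
```

The point of the Colang layer is that the rail sits outside the model: the toolkit matches the user's message against the defined flows and answers with the canned response before the question ever shapes the LLM's output.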
Sources:
NVIDIA (GitHub), via HardwareLuxx
25 Comments on NVIDIA Wants to Set Guardrails for Large Language Models Such as ChatGPT
Still waiting for the real-life version of Cyberdyne Systems, unfortunately!
Otherwise, you are just making a propaganda factory out of your "AI" by telling it what it can and can't say.
I'm sure these "guardrails" will never be used to "guard" against things TPTB don't like! No siree, these never get abused! We're already on that train, seeing as how people want to censor anything they don't like and force it upon us. Case in point.
Put this thing in charge of anything without guardrails and it's only a matter of moments before real people (not internet feelings) get hurt. And yes, they are going to try that whether you like it or not, I can practically guarantee it. The "guard rails" you're referring to are basically social etiquette instructions on how to not be a world class asshole. They include things like "don't discuss genocide in grocery searches" and sensible boundaries like that. If you take issue with those types of things, I really think you might want to think about why for a bit. Yes, my case, my point. That's what happens WITHOUT guardrails. You are literally making my argument. Of course they'll get abused; it's an open ecosystem, and someone could use them to make an intentionally racist Mario clone that really, really wants to kill anyone not from the Mushroom Kingdom, if they wanted. But that's not the point of them, and the overall goal remains to keep AIs from losing their shit in their intended use cases.
"The powers that be" are actually woefully behind on this, they hardly even know AI exists.
The problem with employing said censorship is that it only drives people to defeat it. The Streisand effect never fails: if people question the system about topics that aren't considered kosher, they will react to that censorship by debating and investigating those topics all the more intensely.
I think it's one of those things that are best faced for what they are.
I think the thing we can agree on is the concept isn't entirely bad, but it needs a very light and controlled touch. Not a broad stroke banning issues and datasets that may be useful.
I sort of wish the genie had stayed in the bottle too but it didn't and here we are.
This specific subject was the core plot of Fallout 4, and it has fascinated me ever since the first time I played that game. Spoilers ahead:
And contrary to what the title might suggest, NVIDIA is also not forcing this on anyone. It is exactly that: it figures out a word, and based on that it figures out the next word, on loop (roughly as sketched below). It makes amazing things, but it can't actually think in any sense of the word.
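To make the "next word, on loop" point concrete, here's a toy sketch. Real LLMs predict tokens with a transformer rather than counting word pairs, so this only demonstrates the autoregressive loop itself:

```python
# Toy bigram "language model": always pick the most frequent follower of
# the previous word. Real LLMs use transformers over tokens; this only
# demonstrates the next-word-on-loop idea from the comment above.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, max_words=8):
    words = [start]
    for _ in range(max_words):
        options = followers.get(words[-1])
        if not options:  # no known continuation: stop
            break
        words.append(options.most_common(1)[0][0])  # likeliest next word
    return " ".join(words)

print(generate("the"))  # -> "the cat sat on the cat sat on the"
```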
Censorship in this case would be making them do it the way we want it to be (as in limiting their creative freedom, a type of censorship all of its own)
Not anything and everything everywhere is censorship just because we may not like it. Hmm, I don't particularly think so, but maybe; I don't know, this would go deeper than I'm apt to discuss.
We're pretty far from full sentience from what I can tell, but one day it might certainly be a possibility.
I don't get all the hype about it; it's just Akinator on steroids to me. And it's so heavily restricted it's uninteresting, at least to me. Synths are just robots with realistic skin, blood, and even a system that allows them to eat or drink to pass as humans. But in the end they're machines, not actual humans.
The game sort of makes you paranoid, because throughout the quests you don't know who's a synth and who's an actual human (you can tell by using stealth, but that's taking advantage of the game's broken mechanics).
I nuked the Institute, though. My definition of "banging robots" differs from that of Bethesda.
It is not censorship, no. But remember, AI is beginning to pervade the entire online existence. Search engine results, Q and A, basic coding and tech support, etc. Unless this is curbed, it will be omnipresent soon. And why not? LLM is a very useful tool in some circumstances.
But in that vein, if I get all of my search results from Bing, and OpenAI feels that certain political topics are to be gatekept, how is that not censorship?
The oligarchy of information (Microsoft, Google, Apple, etc) cannot be anything other than organic, else they are simply a means of control over others, by TPTB in their companies.
I also believe that one day the technology to transfer our consciousness from an organic medium (the brain) to a digital one will be possible; if anything, research on this is one of Neuralink's longest-term objectives, and it will be something essential for mankind. Transhumanism, cyberpunk, and all that. It'll be part of the future; we might not live to see it, and it's very likely that our children and possibly their children won't either, but it will eventually happen. Sentient AI is only a step on this roadmap, IMO.
If you said that to a member in TPU, you'd get banned.
Restricting freedom of speech would be like someone banning books, or preventing an individual's right to express themselves as they desire. Details can't be brought into this, as it relates to global politics and religion.