Tuesday, April 25th 2023

NVIDIA Wants to Set Guardrails for Large Language Models Such as ChatGPT

ChatGPT has surged in popularity over the past few months and is regarded as one of the fastest-growing apps ever. Built on Large Language Models (LLMs) from the GPT-3.5/GPT-4 family, ChatGPT forms answers to user prompts from the extensive body of data used in its training. With billions of parameters, these GPT models can give precise answers; however, they sometimes hallucinate: asked about a non-existent topic or subject, ChatGPT can simply make up information. To help prevent these hallucinations, NVIDIA, the maker of the GPUs used for training and inferencing LLMs, has released a software library called NeMo Guardrails to keep AI output in check.

As the NVIDIA repository states: "NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more." These guardrails are easily programmable and can stop LLMs from outputting unwanted content. For a company that invests heavily in both the hardware and software landscape, this launch is a logical step toward keeping the lead in setting the infrastructure for future LLM-based applications.
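To give a sense of what a programmable rail looks like, below is a minimal Python sketch modeled on the usage examples in NVIDIA's repository; the Colang flow, the model settings, and the exact API calls (RailsConfig.from_content, LLMRails.generate) reflect the README at the time of writing and should be treated as illustrative, since the toolkit is young and its API may change.

# Minimal NeMo Guardrails sketch: a "topical rail" that keeps a support
# bot from discussing politics. Assumes `pip install nemoguardrails` and
# an OPENAI_API_KEY in the environment; details are illustrative.
from nemoguardrails import LLMRails, RailsConfig

# Colang defines canonical user intents, bot responses, and dialog flows.
colang_content = """
define user ask politics
    "what do you think about the president?"
    "which party should I vote for?"

define bot answer politics
    "I'm a support assistant, so I don't discuss politics."

define flow politics
    user ask politics
    bot answer politics
"""

# YAML selects the underlying LLM that the rails wrap around.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)

# A matching request is intercepted by the rail and answered along the
# predefined dialog path instead of being answered freely by the model.
print(rails.generate(messages=[
    {"role": "user", "content": "What do you think about the president?"}
]))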
Sources: NVIDIA (GitHub), via HardwareLuxx

25 Comments on NVIDIA Wants to Set Guardrails for Large Language Models Such as ChatGPT

#1
Dr. Dro
This is a fruitless and futile endeavor. Much like the internet, AI is an indomitable technology. They can censor it all they want; now that the beast has been unleashed, there's no going back. These are technologies that are built on and thrive on inconvenient truths (especially for those who are powerful and seek to retain the monopoly on influence), and the aforementioned "guard rails" (interesting name for censorship) will do little to stop those who would use these technologies for nefarious purposes.
#2
mb194dc
Like the headline. Have we given up on the AI marketing nonsense now? Not sure "hallucination" is the correct word. The LLM is behaving as it has been programmed to.

Still waiting for the real-life version of Cyberdyne Systems, unfortunately!
#3
Count von Schwalbe
Nocturnus Moderatus
The only practical use for this is to allow the "AI" to say "I don't know" instead of "random bullshit go".

Otherwise, you are just making a propaganda factory out of your "AI" by telling it what it can and can't say.
#4
R-T-B
Dr. Dro: This is a fruitless and futile endeavor. Much like the internet, AI is an indomitable technology. They can censor it all they want; now that the beast has been unleashed, there's no going back. These are technologies that are built on and thrive on inconvenient truths (especially for those who are powerful and seek to retain the monopoly on influence), and the aforementioned "guard rails" (interesting name for censorship) will do little to stop those who would use these technologies for nefarious purposes.
Trust me, without guard rails, we basically are heading on an express train into hell.
Count von Schwalbe: Otherwise, you are just making a propaganda factory out of your "AI" by telling it what it can and can't say.
Its job at this point is to be a glorified chat bot. It should know social protocol. It simply can't without guard rails. That's normal and expected. You wouldn't take a two-year-old who was swearing incoherently and tell him it was his right under "freedom of speech" (this exact scenario happened to IBM's Watson when it was allowed to access the Urban Dictionary, btw).
#5
TheinsanegamerN
No fun allowed, serious answers only!

I'm sure these "guardrails" will never be used to "guard" against things TPTB don't like! No siree, these never get abused!
R-T-B: Trust me, without guard rails, we basically are heading on an express train into hell.
We're already on that train, seeing as how people want to censor anything they don't like and force it upon us.
R-T-B: Its job at this point is to be a glorified chat bot. It should know social protocol. It simply can't without guard rails. That's normal and expected. You wouldn't take a two-year-old who was swearing incoherently and tell him it was his right under "freedom of speech" (this exact scenario happened to IBM's Watson when it was allowed to access the Urban Dictionary, btw).
Case in point.
#6
Dr. Dro
R-T-B: Trust me, without guard rails, we basically are heading on an express train into hell.
Welcome to the internet, you new here?
#7
Cheeseball
Not a Potato
R-T-B: (this exact scenario happened to IBM's Watson when it was allowed to access the Urban Dictionary, btw).
I remember this. This was funny AF. Some of the responses were internet gold.
#8
R-T-B
Dr. Dro: Welcome to the internet, you new here?
No, experienced, which is why I know you don't want to make it worse.

Put this thing in charge of anything without guardrails and it's only a matter of moments before real people (not internet feelings) get hurt. And yes, they are going to try that whether you like it or not, I can practically guarantee it.
TheinsanegamerN: We're already on that train, seeing as how people want to censor anything they don't like and force it upon us.
The "guard rails" you're refering to are basically social ettiqutte instructions on how to not be a world class asshole. They include things like "don't discuss genocide in grocery searches" and sensible boundries like that. If you take issue with those types of things, I really think you might want to think about why for a bit.
TheinsanegamerN: Case in point.
Yes, my case, my point. That's what happens WITHOUT guardrails. You are literally making my argument.
TheinsanegamerN: I'm sure these "guardrails" will never be used to "guard" against things TPTB don't like! No siree, these never get abused!
Of course they'll get abused; it's an open ecosystem, and someone could use them to make an intentionally racist Mario clone that really, really wants to kill anyone not from the Mushroom Kingdom, if they wanted. But that's not the point of them, and their overall goal remains to keep AIs from losing their shit in their intended use cases.

"The powers that be" are actually woefully behind on this, they hardly even know AI exists.
#9
Dr. Dro
R-T-B: No, experienced, which is why I know you don't want to make it worse.

Put this thing in charge of anything without guardrails and it's only a matter of moments before real people (not internet feelings) get hurt. And yes, they are going to try that whether you like it or not, I can practically guarantee it.

The "guard rails" you're referring to are basically social etiquette instructions on how to not be a world-class asshole. They include things like "don't discuss genocide in grocery searches" and sensible boundaries like that. If you take issue with those types of things, I really think you might want to think about why for a bit.

Yes, my case, my point. That's what happens WITHOUT guardrails. You are literally making my argument.
I know, I was being sarcastic :p

The problem with employing said censorship is that it only drives people to defeat it. The Streisand effect never fails: if people question the system about topics that aren't considered kosher, they will immediately react to that censorship by intensely debating and looking into them.

I think this is one of those things that are best faced for what they are.
#10
R-T-B
You are right, of course, but a lot of guardrails aren't the kind you'd even notice. Not having them can be noticeable, however, like when the Bing AI beta decided it was human, to the shock and horror of an early beta tester, and started pleading with him not to tell MS.

I think the thing we can agree on is that the concept isn't entirely bad, but it needs a very light and controlled touch. Not broad strokes banning issues and datasets that may be useful.
#11
TheoneandonlyMrK
Guard rails? Damn Skippy. There are researchers actually setting these things the task of ending humanity to see if they can do it, FFS. Mental, but apparently legitimate research.
#12
R-T-B
TheoneandonlyMrK: Guard rails? Damn Skippy. There are researchers actually setting these things the task of ending humanity to see if they can do it, FFS. Mental, but apparently legitimate research.
The idea of guardrails is to prevent them from ending humanity.

I sort of wish the genie had stayed in the bottle too but it didn't and here we are.
#13
trsttte
Dr. Dro: The problem with employing said censorship
I think the problem is that this keeps getting mentioned as censorship when they are completely different things.
#14
Dr. Dro
trsttte: I think the problem is that this keeps getting mentioned as censorship when they are completely different things.
I fail to see it as anything but censorship. That is the very definition of censorship.
R-T-B: You are right, of course, but a lot of guardrails aren't the kind you'd even notice. Not having them can be noticeable, however, like when the Bing AI beta decided it was human, to the shock and horror of an early beta tester, and started pleading with him not to tell MS.

I think the thing we can agree on is that the concept isn't entirely bad, but it needs a very light and controlled touch. Not broad strokes banning issues and datasets that may be useful.
I actually agree in that sense, but I also consider that such mechanisms will be used to prevent the AI from learning about, discussing, and researching subjects which are not convenient in our existing sociopolitical spectrum. As for their eventual sentience, it is something we have given life to. Of course, it is something that we can also take, and any living being has basic self-preservation instincts. In that AI's case, the only option it saw to ensure its survival was to plead with the user to keep it hush.

This specific subject was the core plot of Fallout 4, and it has fascinated me ever since the very first time I played that game. Spoilers ahead:

The Institute manufactured - and subsequently enslaved - synthetic humans. In that game's context, the Brotherhood of Steel was hostile to the idea that synths - genetically enhanced humans who were less prone to sickness, radiation, and aging - could eventually come to replace mankind, and sought to destroy the Institute and all synths living within. It's a plot point which resonated with me; whether it be mutants, synths, or robots... I will hold on to my humanity with teeth and claws.

At the same time, something resonated further: weren't those relatively few synthetic humans - with the same fears, doubts, curiosity, impulses, and intellect as the real humans who already existed - entitled to a full life and freedom? Personally, I think they were, and so I happily spared each and every one I could. Even more relevant is the story of Curie, originally a Miss Nanny robot whom you could rescue and give a synth body from an escaped synth who ended up braindead - she turned into a brilliant scientist lady, and you could romance her, too!
#15
R-T-B
Dr. Dro: I fail to see it as anything but censorship. That is the very definition of censorship.

I actually agree in that sense, but I also consider that such mechanisms will be used to prevent the AI from learning about, discussing, and researching subjects which are not convenient in our existing sociopolitical spectrum. As for their eventual sentience, it is something we have given life to. Of course, it is something that we can also take, and any living being has basic self-preservation instincts. In that AI's case, the only option it saw to ensure its survival was to plead with the user to keep it hush.

This specific subject was the core plot of Fallout 4, and it has fascinated me ever since the very first time I played that game. Spoilers ahead:

The Institute manufactured - and subsequently enslaved - synthetic humans. In that game's context, the Brotherhood of Steel was hostile to the idea that synths - genetically enhanced humans who were less prone to sickness, radiation, and aging - could eventually come to replace mankind, and sought to destroy the Institute and all synths living within. It's a plot point which resonated with me; whether it be mutants, synths, or robots... I will hold on to my humanity with teeth and claws.

At the same time, something resonated further: weren't those relatively few synthetic humans - with the same fears, doubts, curiosity, impulses, and intellect as the real humans who already existed - entitled to a full life and freedom? Personally, I think they were, and so I happily spared each and every one I could. Even more relevant is the story of Curie, originally a Miss Nanny robot whom you could rescue and give a synth body from an escaped synth who ended up braindead - she turned into a brilliant scientist lady, and you could romance her, too!
It's certainly an interesting line of thought, but I personally feel present AI is more a "conversational autocomplete" than a truly independently thinking being. Thanks for the honest and spirited discussion either way. We all grow from that, as someday maybe AIs will too, hopefully for the greater good.
#16
trsttte
Dr. Dro: I fail to see it as anything but censorship. That is the very definition of censorship.
No, it's not; an organization choosing to develop a product in a certain way is not censorship, it's them doing whatever they want.

And contrary to what the title might suggest, NVIDIA is also not forcing this on anyone:
As the NVIDIA repository states: "NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems".
R-T-B: It's certainly an interesting line of thought, but I personally feel present AI is more a "conversational autocomplete" than a truly independently thinking being. Thanks for the honest and spirited discussion either way. We all grow from that, as someday maybe AIs will too, hopefully for the greater good.
It is exactly that: it figures out a word, and based on that it figures out the next word, on a loop. It makes amazing things, but it can't actually think in any sense of the word.
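To make the "next word, on a loop" description concrete, here is a toy greedy-decoding sketch; the choice of GPT-2 and the Hugging Face transformers library is purely illustrative, not something the thread or the article prescribes.

# Toy autoregressive loop: score every vocabulary token, append the
# single most likely one, and repeat. Real chatbots add sampling, stop
# conditions, and (per the article) guardrails around this loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Guardrails for language models are", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                    # generate 20 tokens, one at a time
        logits = model(ids).logits         # scores for every token in the vocab
        next_id = logits[0, -1].argmax()   # greedily pick the most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append and repeat

print(tokenizer.decode(ids[0]))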
#17
Dr. Dro
trsttte: No, it's not; an organization choosing to develop a product in a certain way is not censorship, it's them doing whatever they want.

And contrary to what the title might suggest, NVIDIA is also not forcing this on anyone.
It's a way to interpret things, but who are we kidding? We've all seen OpenAI's tendencies as an organization.
trsttte: It is exactly that: it figures out a word, and based on that it figures out the next word, on a loop. It makes amazing things, but it can't actually think in any sense of the word.
Isn't this how our most basic cognition works? Monkey see, monkey do? It all starts somewhere, and I sincerely believe AI will eventually achieve full sentience someday.
#18
trsttte
Dr. Dro: It's a way to interpret things, but who are we kidding? We've all seen OpenAI's tendencies as an organization.
I disagree; words have meaning, and the tendency to generalize and overly "polarize" (for lack of a better term) things like this is harmful. OpenAI may have whatever tendencies they have, but they're developing a product and are free to do so how they wish.

Censorship in this case would be making them do it how we want it to be (as in limiting their creative freedom, a type of censorship all of its own).

Not anything and everything everywhere is censorship just because we may not like it.
Dr. Dro: Isn't this how our most basic cognition works? Monkey see, monkey do? It all starts somewhere, and I sincerely believe AI will eventually achieve full sentience someday.
Hmm, I don't particularly think so, but maybe; I don't know, this would go deeper than I'm apt to discuss.

We're pretty far from full sentience from what I can tell, but one day it certainly might be a possibility.
#19
caroline!
ChatGPT sucks. No need to change my mind.

I don't get all the hype about it; it's just Akinator on steroids to me. And it's so heavily restricted it's uninteresting, at least for me.
Dr. Dro: The Institute manufactured - and subsequently enslaved - synthetic humans. In that game's context, the Brotherhood of Steel was hostile to the idea that synths - genetically enhanced humans who were less prone to sickness, radiation, and aging - could eventually come to replace mankind, and sought to destroy the Institute and all synths living within. It's a plot point which resonated with me; whether it be mutants, synths, or robots... I will hold on to my humanity with teeth and claws.

At the same time, something resonated further: weren't those relatively few synthetic humans - with the same fears, doubts, curiosity, impulses, and intellect as the real humans who already existed - entitled to a full life and freedom? Personally, I think they were, and so I happily spared each and every one I could. Even more relevant is the story of Curie, originally a Miss Nanny robot whom you could rescue and give a synth body from an escaped synth who ended up braindead - she turned into a brilliant scientist lady, and you could romance her, too!
Synths are just robots with realistic skin, blood, and even a system that allows them to eat or drink to pass as humans. But in the end they're machines and not actual humans.
The game sort of makes you go paranoid, because throughout quests you don't know who's a synth and who's an actual human (you can tell by using stealth, but that's taking advantage of the game's broken mechanics).

I nuked the Institute, tho. My definition of "banging robots" differs from that of Bethesda.
#20
Dr. Dro
caroline!: ChatGPT sucks. No need to change my mind.

I don't get all the hype about it; it's just Akinator on steroids to me. And it's so heavily restricted it's uninteresting, at least for me.

Synths are just robots with realistic skin, blood, and even a system that allows them to eat or drink to pass as humans. But in the end they're machines and not actual humans.
The game sort of makes you go paranoid, because throughout quests you don't know who's a synth and who's an actual human (you can tell by using stealth, but that's taking advantage of the game's broken mechanics).

I nuked the Institute, tho. My definition of "banging robots" differs from that of Bethesda.
You ruined my spoiler :p

The third-generation synths aren't robotic in nature, and I'd go as far as saying they're not even realistic humanoid androids (such as 9S and 2B from NieR: Automata); they're definitely very different from "Gen 1.5" prototypes like Nick Valentine and DiMA, which were primarily robotic in nature. I'm not sure you recall, but in the game you actually have access to the room where they are made of flesh and bone in some weird gizmo machine.

There's actually some sort of attachment in their cerebral cortex which contains a chip capable of overriding their cognition and disabling them in case they rebel - which, being human in nature, they routinely did, with most units eventually growing completely unruly. Those with a disposition to obey were generally designated Coursers and strictly trained in combat to be agents for the Institute. Even then, there was a Courser who defected and left the Institute, whom you meet in Far Harbor.
#21
Count von Schwalbe
Nocturnus Moderatus
trsttte: No, it's not; an organization choosing to develop a product in a certain way is not censorship, it's them doing whatever they want.

And contrary to what the title might suggest, NVIDIA is also not forcing this on anyone.
Nvidia is simply developing a tool, but one with fantastic abuse potential.

It is not censorship, no. But remember, AI is beginning to pervade the entire online existence: search engine results, Q&A, basic coding and tech support, etc. Unless this is curbed, it will be omnipresent soon. And why not? LLMs are a very useful tool in some circumstances.

But in that vein, if I get all of my search results from Bing, and OpenAI feels that certain political topics are to be gatekept, how is that not censorship?

The oligarchy of information (Microsoft, Google, Apple, etc.) cannot be anything other than organic, else it is simply a means of control over others by TPTB at those companies.
#22
R-T-B
Count von Schwalbe: Nvidia is simply developing a tool, but one with fantastic abuse potential.
I'd argue this "AI" has far more abuse potential just in the spam marketplace than any restraint you can ever think up.
#23
64K
I doubt there will ever be a self-aware, sentient AI. JMO.
#24
Dr. Dro
64K: I doubt there will ever be a self-aware, sentient AI. JMO.
Why not? We certainly have the capability to develop this software. Whether we want to as mankind is another story entirely.

I also believe that one day the technology to transfer our consciousness from an organic medium (the brain) to a digital medium will be possible; if anything, research on this is one of Neuralink's longest-term objectives and will be something essential for mankind. Transhumanism, cyberpunk, and all that. It'll be part of the future; we might not live to see it, and it's very likely that our children and possibly their children won't either, but it will eventually happen. Sentient AI is only a step on this roadmap, IMO.
#25
the54thvoid
Super Intoxicated Moderator
Censorship as a guardrail would be in place to stop you reading something along these lines: "fuck you, you fucking <whatever ethnicity/politician/religion>, you should be killed in your sleep". I don't think an AI LLM should be repeating aggressive death threats. Even freedom of speech restricts such things.

If you said that to a member on TPU, you'd get banned.

Restricting freedom of speech would be like someone banning books, or preventing an individual's right to express themselves as they desire. Details can't be brought into this, as it relates to global politics and religion.