Monday, February 19th 2024

NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium (AISIC) as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.
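For a concrete sense of what NeMo Guardrails does, the sketch below wires a simple topical rail around an LLM using the library's open-source Python API. The backing model, the Colang rail and the sample prompt are illustrative assumptions for this sketch, not details from NVIDIA's announcement.

```python
# Minimal NeMo Guardrails sketch (pip install nemoguardrails).
# Assumes an OPENAI_API_KEY in the environment; the model choice and
# the rail below are illustrative, not from NVIDIA's announcement.
from nemoguardrails import LLMRails, RailsConfig

# The YAML picks the backing model; the Colang snippet defines a topical
# rail that deflects off-topic questions instead of letting the model answer.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

colang_content = """
define user ask off topic
  "What do you think about politics?"

define bot refuse off topic
  "I can only answer questions about our products."

define flow
  user ask off topic
  bot refuse off topic
"""

config = RailsConfig.from_content(
    colang_content=colang_content,
    yaml_content=yaml_content,
)
rails = LLMRails(config)

# An off-topic prompt is matched to the rail and answered with the
# canned refusal rather than a free-form model response.
response = rails.generate(messages=[
    {"role": "user", "content": "What do you think about politics?"}
])
print(response["content"])
```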
AISIC Research Focus
Through the consortium, NIST aims to facilitate knowledge sharing and advance applied research and evaluation activities to accelerate innovation in trustworthy AI. AISIC members, which include more than 200 of the nation's leading AI creators, academics, government and industry researchers, as well as civil society organizations, bring technical expertise in areas such as AI governance, systems and development, psychometrics and more.

In addition to participating in working groups, NVIDIA plans to contribute a range of computing resources and best practices for implementing AI risk-management frameworks and AI model transparency, as well as several NVIDIA-developed, open-source AI safety, red-teaming and security tools.

Learn more about NVIDIA's guiding principles for trustworthy AI.
Sources: NVIDIA, NIST Gov

9 Comments on NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

#1
Suspecto
"Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments."

Google and OpenAI are the members of the club; this is the bs you get when you ask an obvious and simple question. Safe, trustworthy AI my ass.
#2
SOAREVERSOR
Suspecto"Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments."

Google and OpenAI are the members of the club, this is the bs you get when you ask an obvious and simple question. Safe, trustworthy AI my ass.
There's no way to get a handle on this without massive regulation.
#3
Random_User
T0@st said: "...safe, secure and trustworthy AI..."

Hmm...? :confused: These words don't belong in the same sentence if AI is one of them. Like how it adds "credibility" to all the content already available on the web.
#4
Icon Charlie
Remember... The Moar You Buy... The Moar You Save!!! :peace:
#5
Bones
SOAREVERSOR said: "There's no way to get a handle on this without massive regulation."
Don't worry, they'll get those AIs right on figuring out whatever legal loopholes they can...
"Damn, that was fast!"

My point is you can make up all the regs you want.
If someone decides to use an AI to find ways around all the regs, it will do it, and do it quickly too.
#7
Nihillim
Safeties on: lower short-term profits, still alive to operate businesses in the long run.
Safeties off: higher short-term profits, humanity wiped out by the machines.
Hmm...
Random_User said: "Hmm...? :confused: These words don't belong in the same sentence if AI is one of them. Like how it adds 'credibility' to all the content already available on the web."
They're trained off of all our crap, and we're flawed as it is. I wouldn't trust me neither! XD
#8
ty_ger
Suspecto said: "Google and OpenAI are the members of the club; this is the bs you get when you ask an obvious and simple question. Safe, trustworthy AI my ass."
What is the obvious and simple answer you wish you saw instead? I am puzzled. The answer I just read seemed like the most obvious answer I expected to read.
#9
the54thvoid
Super Intoxicated Moderator
Suspecto"Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments."

Google and OpenAI are the members of the club, this is the bs you get when you ask an obvious and simple question. Safe, trustworthy AI my ass.
The answers given are accurate. It's human bias that makes them unpalatable. I asked ChatGPT if human behaviour affects socialism and capitalism. It explained the great benefit of entrepreneurship and innovation for capitalism, but the drawbacks were short-termism and unethical business practice. Socialism's positive point was based on equity and cooperation, but its drawback was clearly a lack of motivation to work (hard) for perceived equity. Technically, neither system actually exists on its own.

The answers are reflections of what was typed in. Unfortunately, too many people type in socially ambiguous, morally slanted, or politically opinionated questions, and LLMs are NOT designed to answer those philosophical questions. Even if you asked humans, you'd get vastly differing answers.

People need to stop using LLMs to 'prove' false points about the system's bias. The bias is clearly already in the expectation of the person asking (what is in effect a clever encyclopedia) a question with an answer they already expect.