Monday, February 19th 2024
NVIDIA Joins US Artificial Intelligence Safety Institute Consortium
NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.
NVIDIA actively works to make AI safety a reality through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring that large language model responses are accurate, appropriate, on topic and secure. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.
AISIC Research Focus
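For context on how NeMo Guardrails keeps model responses on topic: developers describe allowed and disallowed conversation patterns in the library's Colang configuration language. The sketch below is illustrative only; the topic and wording are assumptions, not taken from NVIDIA's documentation.

```
# Minimal sketch of a Colang (v1) rail that keeps a bot away from an
# off-limits topic. The topic chosen here is a hypothetical example.
define user ask about politics
  "what do you think about the president?"
  "which party should I vote for?"

define bot refuse politics
  "I'm sorry, I can't discuss political topics."

define flow politics
  user ask about politics
  bot refuse politics
```

At runtime, the library matches incoming user messages against the example utterances and routes matching conversations through the defined flow instead of passing them straight to the model.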
Through the consortium, NIST aims to facilitate knowledge sharing and advance applied research and evaluation activities to accelerate innovation in trustworthy AI. AISIC members, which include more than 200 of the nation's leading AI creators, academics, government and industry researchers, as well as civil society organizations, bring technical expertise in areas such as AI governance, systems and development, psychometrics and more.
In addition to participating in working groups, NVIDIA plans to leverage a range of computing resources and best practices for implementing AI risk-management frameworks and AI model transparency, as well as several NVIDIA-developed, open-source AI safety, red-teaming and security tools.
Learn more about NVIDIA's guiding principles for trustworthy AI.
Sources:
NVIDIA, NIST Gov
9 Comments on NVIDIA Joins US Artificial Intelligence Safety Institute Consortium
Google and OpenAI are members of the club; this is the BS you get when you ask an obvious and simple question. Safe, trustworthy AI, my ass.
"Damn, that was fast!"
My point is you can make up all the regs you want.
If someone decides to use an AI to find ways around all the regs, it will do it, and do it quickly too.
Safeties off: higher short-term profits, humanity wiped out by the machines.
Hmm... They're trained off of all our crap, and we're flawed as it is. I wouldn't trust me neither! XD
The answers are reflections of what was typed in. Unfortunately, too many people type in socially ambiguous, morally slanted, or politically opinionated questions, and LLMs are NOT designed to answer those philosophical questions. Even if you asked humans, you'd get vastly differing answers.
People need to stop using LLMs to 'prove' false points about the system's bias. The bias is clearly already in the expectation of the person asking (what is in effect a clever encyclopedia) a question with an answer they already expect.