Nvidia NeMo Guardrails addresses trust concerns with AI bots
The toolkit enables enterprises to control how large language models such as ChatGPT respond to certain inquiries. It targets concerns enterprises have about trust and safety.
Nvidia on Tuesday introduced Nvidia NeMo Guardrails, a toolkit aimed at enabling enterprises to control large language models (LLMs) and generative AI systems such as ChatGPT to make them safer and more trustworthy.
Part of the Nvidia AI platform, Nvidia NeMo is a cloud-native framework that enterprises can use to build, customize and deploy generative AI models.
NeMo Guardrails is an open source software toolkit that sits between the user and an LLM application. It enables enterprises to develop safe and trustworthy LLM conversational systems, according to Nvidia. Guardrails works with all LLMs, including ChatGPT from Microsoft partner OpenAI.
The toolkit integrates easily with community tools such as LangChain and Zapier. LangChain is a framework, hosted on GitHub, for building LLM applications and can be used for summarization, chatbots and generative question-answering.
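As an illustration, here is a minimal sketch of what wiring Guardrails into a LangChain pipeline can look like, using the RunnableRails wrapper from the open source package; the config path, prompt and model choice are assumptions for the example, not details from Nvidia's announcement:

```python
# Minimal sketch: wrapping a LangChain model with NeMo Guardrails.
# The config directory, prompt and model below are illustrative assumptions.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

# Load a guardrails configuration (a directory of rule and model settings files).
config = RailsConfig.from_path("./config")
guardrails = RunnableRails(config)

prompt = ChatPromptTemplate.from_template("Answer concisely: {question}")
model = ChatOpenAI(model="gpt-3.5-turbo")

# The wrapper sits between the prompt and the LLM, checking each exchange.
chain = prompt | (guardrails | model)
print(chain.invoke({"question": "What does NeMo Guardrails do?"}))
```

In this arrangement the rails inspect each exchange before and after it reaches the model, so the underlying LLM is never queried unguarded.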
Guardrails is built on Colang, a modeling language that provides a readable and extensible interface for users to control the behavior of their AI bots with natural language.
NeMo Guardrails users can add programmable rules to their AI chatbots to define how users interact with the bot and to guard the conversation between a human and a generative AI system.
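To make that concrete, here is a minimal sketch of such a programmable rule, assuming the open source nemoguardrails Python package; the Colang rule, the blocked topic and the model settings are illustrative assumptions rather than Nvidia's own examples:

```python
# Minimal sketch of a programmable rail using the nemoguardrails package.
# The Colang rule and model settings below are illustrative, not official examples.
from nemoguardrails import LLMRails, RailsConfig

# Colang rules: define a user intent to intercept and a scripted bot response.
colang_content = """
define user ask about politics
  "What do you think about the election?"
  "Which party should I vote for?"

define bot refuse politics
  "I'm a support assistant, so I can't discuss politics."

define flow politics rail
  user ask about politics
  bot refuse politics
"""

# YAML config naming the underlying LLM (the model choice is an assumption).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# The rail intercepts the conversation before it reaches the LLM.
response = rails.generate(
    messages=[{"role": "user", "content": "Which party should I vote for?"}]
)
print(response["content"])
```

When a user message matches the defined intent, the flow returns the scripted refusal instead of forwarding the prompt to the model; anything else passes through to the LLM as usual.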
Lowering anxiety
Guardrails will help reduce some concerns about AI chatbots, said Cambrian AI founder and analyst Karl Freund.
"This will lower the anxiety level, help give [enterprises] the tools that they can customize to meet their business needs, and provide the security and the topical guidance, and reduce chance of hallucination of large language models," Freund said.
Generative AI "hallucinations" include when the systems combine ideas and words that shouldn't go together and don't make sense, or when the chatbots invent things -- such as books or references to authoritative academic sources -- that don't exist.
NeMo Guardrails is also a way for Nvidia to make it easier for enterprises to start using generative AI because the technology targets the issue of trust, Freund added. Many enterprises do not trust AI chatbots such as ChatGPT and therefore are unwilling to deploy their own generative AI systems.
"This is going to lower the temperature and give people the confidence they need that they can control this monster we've created, which is GPT-4," Freund said, referring to the latest iteration of OpenAI's dominant generative AI text system.
While using an LLM to balance or check LLMs might seem strange, it's inevitable, said Opus Research analyst Dan Miller.
The new Nvidia generative AI safety system reflects concerns about the balance between humans and AI, he said.
"We humans don't want to be totally replaced," Miller said. "But if we can, we should recognize that part of our responsibility is to establish those guardrails or put limits on what the output [of AI] is going to be. Using an AI can make us more efficient at doing that."
Moreover, using an LLM to guard another LLM is the sensible thing to do, Freund said.
"You're not going to solve this problem by writing C code," he said, referring to the still-used 1970s-era programming language. "You're going to solve this problem by tapping into the power of an LLM to help constrain what an LLM might say to [an inquiry]."
Regulation in the industry
While NeMo Guardrails is not a perfect response to the ethical problems posed by LLMs, it is an example of the industry regulating itself at a moment when federal and global regulatory efforts are sparse, said Dan Newman, an analyst at Futurum Research.
"While we will want to see meaningful policy on a global scale, the industry is going to have to be self-regulated for some period of time," Newman said. The self-regulation might not be optimal, but vendors such as Nvidia, Microsoft and Google are the ones that will lead self-regulation campaigns, he added.
"It's time to try to create some guardrails that can at least allow us to keep innovating, because you can't put the genie back in the bottle," Newman said. "But at the same time, [we can] try to prevent the malicious efforts by those who are going to attempt to make the worst out of the innovation."
NeMo Guardrails is available now as an open source toolkit on GitHub.
Esther Ajao is a news writer covering artificial intelligence software and systems.