artificial intelligence (AI) governance
What is artificial intelligence (AI) governance?
Artificial intelligence governance is the legal framework for ensuring AI and machine learning technologies are researched and developed so that humanity can adopt and use these systems in ethical and responsible ways. AI governance aims to close the gap between accountability and ethics in technological advancement.
AI use is rapidly increasing across nearly all industries, including healthcare, transportation, retail, financial services, education and public safety. As a result, governance has taken on a significant role and is getting increased attention.
The main focus of AI governance is on AI as it relates to justice, data quality and autonomy. Overall, AI governance determines how much of daily life algorithms are allowed to shape and who monitors how they function. Some key areas governance addresses include the following:
- AI safety.
- Sectors appropriate for AI automation.
- Legal and institutional structures around AI use and technology.
- Control and access to personal data.
- Moral and ethical questions related to AI.
Why AI governance is needed
AI governance is necessary when machine learning algorithms are used to make decisions that affect people. Machine learning models can encode biases, particularly around race and other protected characteristics, that cause them to mischaracterize individuals. This can result in people being unfairly denied access to healthcare or loans, and can mislead law enforcement in identifying criminal suspects. AI governance determines how best to handle scenarios where AI-based decisions could be unjust or violate human rights.
The rapid adoption of AI tools, systems and technologies across industries raises concerns about AI ethics, transparency and compliance with regulations such as the General Data Protection Regulation. Without proper governance, AI systems could pose risks such as biased decision-making, privacy violations and misuse of data. AI governance seeks to facilitate constructive use of AI technologies while protecting user rights and preventing harm.
AI governance pillars
The White House Office of Science and Technology Policy has made AI policy and governance a national priority in the U.S. It has sought public input on AI risks and benefits. Previously, the executive office created an AI governance framework built on the following six pillars:
- Innovation. Facilitating efforts in business and science to harness and optimize AI's benefits.
- Trustworthy AI. Ensuring AI is transparent and doesn't violate civil liberties, the rule of law or data privacy.
- Education and training. Encouraging the use of AI to expand opportunities and access to new jobs, industries, innovation and education.
- Infrastructure. Focusing on expanding access to data, models, computational infrastructure and other infrastructure elements.
- Applications. Expanding the application of AI technology across the public and private sectors, including transportation, education and healthcare.
- International cooperation. Promoting international collaboration and partnerships built on evidence-based approaches, analytical research and multistakeholder engagements.
Some other components of a strong AI governance framework include the following:
- Decision-making and explainability. AI systems must be designed to make fair and unbiased decisions. Explainability, or the ability to understand the reasons behind AI outcomes, is important for building trust and accountability; a minimal explainability sketch follows this list.
- Regulatory compliance. Organizations must adhere to data privacy requirements, accuracy standards and storage restrictions to safeguard sensitive information. AI regulation helps protect user data and ensure responsible AI use.
- Risk management. AI governance ensures the responsible use of AI and effective risk management strategies, such as selecting appropriate training data sets, implementing cybersecurity measures, and addressing potential biases or errors in AI models.
- Stakeholder involvement. Engaging stakeholders, such as CEOs, data privacy officers and users, is vital for governing AI effectively. Stakeholders contribute to decision-making, provide oversight, and ensure AI technologies are developed and used responsibly over the course of their lifecycle.
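To make the explainability point more concrete, the following is a minimal sketch, assuming a hypothetical approval model with made-up feature names, of how a team might use scikit-learn's permutation importance to see which inputs drive a model's decisions. It illustrates one explainability technique; it is not a prescribed part of any governance framework.

```python
# Minimal explainability sketch: permutation importance on a hypothetical model.
# Feature names, data and the model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["income", "age", "account_tenure", "zip_code"]  # hypothetical
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

A high score on a feature such as zip_code, which can act as a proxy for protected attributes, would be a signal to escalate the model for a governance review.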
How organizations should approach AI governance
There are many actions an organization can take to implement effective and sustainable AI governance practices. They include the following:
- AI culture. Everyone in an organization should feel they have a role to play in AI governance. Building this culture takes time and requires employee training programs that foster continuous learning around AI governance.
- Communication. Employers should clearly communicate the risks of poorly governed AI systems to employees.
- AI governance committee. Forming an oversight and governance committee with members who have expertise in this area can ensure compliance with AI policies throughout an organization.
- Continual improvement. Collecting feedback from employees and customers using AI tools and systems enables an organization to continually improve its AI applications and products. Continually monitoring AI use and identifying and correcting problems are also important.
- Risk assessment. Third-party organizations that specialize in AI risk assessments and audits can provide an alternative perspective on how to improve AI use and governance and minimize the risks involved.
- Governance metrics. Using metrics and key performance indicators (KPIs) can validate whether an organization is adhering to AI governance policies. AI KPIs can be quantitative or qualitative and should cover the provenance, veracity and quality of data, as well as data security, financial value and algorithmic bias; a sketch of one such bias metric follows this list.
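As a minimal sketch of the governance-metrics idea, assuming a hypothetical decision log with made-up column names and an arbitrary tolerance, the snippet below computes one common quantitative bias KPI: the demographic parity difference, or the gap in positive-outcome rates between groups.

```python
# Minimal bias-KPI sketch: demographic parity difference.
# Column names, groups and the alert threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  outcome_col: str) -> float:
    """Largest gap in positive-outcome rate between any two groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision log: one row per applicant, 1 = approved.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   0],
})

gap = demographic_parity_difference(decisions, "group", "approved")
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.2:  # tolerance a governance committee might set; illustrative only
    print("KPI breached -- escalate for review")
```

Tracking a KPI like this over time, alongside qualitative measures such as documentation completeness, gives a governance committee something concrete to report against.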
What is AI model governance?
AI model governance is a subset of AI governance that focuses on how organizations develop and use AI and machine learning models safely and responsibly. Organizations that develop and use these models must keep the following considerations in mind:
- Model ownership. AI development typically involves teams of people working on a model. Tracking the work each individual team member completes is key to ensuring model success, improving collaboration and avoiding issues such as unnecessary duplications.
- Rules and regulations. Implementing a set of rules ensures aspects of model development -- such as data quality, feature engineering and documentation -- are free of errors and comply with the laws and regulations intended to mitigate AI-related risks.
- Data quality. Standards must be in place to ensure the quality and security of the data sets used to train AI models. Data must be accurate and unbiased so that the model learning from it functions properly and produces the desired outputs.
- Continuous monitoring. Once a model moves into production, it must be continuously monitored to ensure it is working as intended. Model governance ensures there are processes in place to retrain and monitor the model as needed; a minimal drift-check sketch follows this list.
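As a hedged illustration of the continuous-monitoring consideration, the sketch below computes the population stability index (PSI), a common statistic for detecting drift between a feature's distribution at training time and its distribution in production. The bin count, the simulated data and the alert thresholds in the comment are illustrative assumptions, not part of any specific model governance standard.

```python
# Minimal drift-check sketch: population stability index (PSI) for a deployed model.
# Bin count, simulated data and alert thresholds are illustrative assumptions.
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline (training) sample and a production sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, size=10_000)    # baseline distribution
production_feature = rng.normal(loc=0.3, size=10_000)  # drifted distribution

psi = population_stability_index(training_feature, production_feature)
print(f"PSI = {psi:.3f}")  # rough rule of thumb: > 0.1 review, > 0.25 consider retraining
```

A check like this would typically run on a schedule, with breaches feeding back into the retraining and oversight processes described above.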
The future of AI governance
The future of AI governance depends on collaboration among governments, organizations and stakeholders. Its success hinges on developing comprehensive AI policies and regulations that protect the public while fostering innovation. Complying with data governance rules and privacy regulations as well as prioritizing safety, trustworthiness and transparency are also important to the future of AI governance.
Various companies are focused on the future of AI governance. For instance, in 2022, Microsoft released version 2 of its "Responsible AI Standard," a guide for organizations managing AI risks and incorporating ethical AI governance into their strategies. Other companies that have committed to implementing governance standards and guardrails include Amazon, Anthropic, Google, IBM, Inflection, Meta and OpenAI.
U.S. government organizations working in this area include the White House Office of Science and Technology Policy's National Artificial Intelligence Initiative Office, which launched in 2021. The National Artificial Intelligence Advisory Committee was created in 2022 as part of the National AI Initiative to advise the president on AI-related issues. In collaboration with both the public and private sectors, the National Institute of Standards and Technology has also developed the AI Risk Management Framework, which recommends risk management approaches for those working with AI.
Some AI experts argue that a gap exists in the legal framework for AI accountability and integrity. In March 2023, technology leaders and AI experts, including Elon Musk and Steve Wozniak, signed an open letter urging a temporary pause on training the most powerful AI systems and the codification of legal regulations. In May 2023, OpenAI CEO Sam Altman testified before Congress and urged lawmakers to regulate AI.
In October 2023, President Joe Biden signed an executive order promoting the "safe, secure, and trustworthy development of AI." In November 2023, the World Economic Forum hosted an AI Governance Summit in California, where more than 200 leaders in the field came together to discuss and promote the democratization of responsible AI worldwide in ways that would bridge the digital divide.