10 top resources to build an ethical AI framework
Several standards, tools and techniques are available to help navigate the nuances and complexities in establishing a generative AI ethics framework that supports responsible AI.
As generative AI gains a stronger foothold in the enterprise, executives are called upon to bring greater attention to AI ethics -- a significant challenge, given thorny issues of bias, transparency, explainability and trust. To illuminate the nuances of ethical AI, government agencies, regulators and independent groups are developing ethical AI frameworks, tools and resources.
"The most impactful frameworks or approaches to addressing ethical AI issues … take all aspects of the technology -- its usage, risks and potential outcomes -- into consideration," said Tad Roselund, managing director and senior partner at Boston Consulting Group (BCG). Many firms approach the development of ethical AI frameworks from purely a values-based position, he added. It's important to take a holistic ethical AI approach that integrates strategy with process and technical controls, cultural norms and governance. These three elements of an ethical AI framework can help institute responsible AI policies and initiatives. And it all starts by establishing a set of principles around AI usage.
"Oftentimes, businesses and leaders are narrowly focused on one of these elements when they need to focus on all them," Roselund reasoned. Addressing any one element may be a good starting point, but by considering all three elements -- controls, cultural norms and governance -- businesses can devise an all-encompassing ethical AI framework. This approach is especially important when it comes to generative AI and its ability to democratize the use of AI.
Enterprises must also instill AI ethics into those who develop and use AI tools and technologies. Roselund advised that open communication, educational resources, and enforced guidelines and processes for the proper use of AI can further bolster an internal AI ethics framework that addresses generative AI.
Top resources to shape an ethical AI framework
There are several standards, tools, techniques and other resources to help shape a company's internal ethical AI framework. The following are listed alphabetically:
- AI Now Institute focuses on the social implications of AI and policy research in responsible AI. Research areas include algorithmic accountability, antitrust concerns, biometrics, worker data rights, large-scale AI models and privacy. The report "AI Now 2023 Landscape: Confronting Tech Power" provides a deep dive into many ethical issues that can be helpful in developing a responsible AI policy.
- Berkman Klein Center for Internet & Society at Harvard University fosters research into the big questions related to the ethics and governance of AI. It has contributed to the dialogue about information quality, influenced policymaking on algorithms in criminal justice, supported the development of AI governance frameworks, studied algorithmic accountability and collaborated with AI vendors.
- CEN-CENELEC Joint Technical Committee on Artificial Intelligence (JTC 21) is an ongoing European initiative to develop responsible AI standards. The group plans to produce standards for the European market and inform EU legislation, policies and values. It also plans to specify technical requirements for characterizing transparency, robustness and accuracy in AI systems.
- Institute for Technology, Ethics and Culture (ITEC) Handbook was a collaborative effort between Santa Clara University's Markkula Center for Applied Ethics and the Vatican to develop a practical, incremental roadmap for technology ethics. The handbook includes a five-stage maturity model with specific, measurable steps that enterprises can take at each level of maturity. It also promotes an operational approach to implementing ethics as an ongoing practice, akin to DevSecOps for ethics. The core idea is to bring legal, technical and business teams together in the early stages of an AI effort to root out ethical bugs when they're much cheaper to fix than after deployment.
- ISO/IEC 23894:2023, Information technology -- Artificial intelligence -- Guidance on risk management, describes how an organization can manage risks specifically related to AI. It can help standardize the technical language characterizing underlying principles and how they apply to developing, provisioning or offering AI systems. It also covers policies, procedures and practices for assessing, treating, monitoring, reviewing and recording risk. The standard is highly technical and oriented toward engineers rather than business experts.
- NIST AI Risk Management Framework (AI RMF 1.0) guides government agencies and the private sector on managing new AI risks and promoting responsible AI. Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, pointed to the depth of the NIST framework, especially its specificity in implementing controls and policies to better govern AI systems within different organizational contexts.
- Nvidia NeMo Guardrails provides a flexible interface for defining specific behavioral rails that bots need to follow, using the Colang modeling language. One chief data scientist said his company uses the open source toolkit to prevent a support chatbot on a lawyer's website from providing answers that might be construed as legal advice; a minimal sketch of that kind of rail appears after this list.
- Stanford Institute for Human-Centered Artificial Intelligence (HAI) provides ongoing research and guidance into best practices for human-centered AI. One early initiative in collaboration with Stanford Medicine is Responsible AI for Safe and Equitable Health, which addresses ethical and safety issues surrounding AI in health and medicine.
- "Towards unified objectives for self-reflective AI" is a paper by Matthias Samwald, Robert Praas and Konstantin Hebenstreit that takes a Socratic approach to identify underlying assumptions, contradictions and errors through dialogue and questioning about truthfulness, transparency, robustness and alignment of ethical principles. One goal is to develop AI meta-systems in which two or more component AI models complement, critique and improve their mutual performance.
- World Economic Forum's "The Presidio Recommendations on Responsible Generative AI" white paper includes 30 "action-oriented" recommendations to "navigate AI complexities and harness its potential ethically." It includes sections on responsible development and release of generative AI, open innovation and international collaboration, and social progress.
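To make the NeMo Guardrails entry concrete, here is a minimal sketch of the kind of rail described above: keeping a support bot from answering questions that could be construed as legal advice. It assumes the open source nemoguardrails Python package; the model settings and Colang patterns are illustrative assumptions, not taken from any real deployment.

```python
# Minimal NeMo Guardrails sketch: block a support bot from giving legal advice.
# The model choice and the Colang patterns below are illustrative assumptions.
from nemoguardrails import LLMRails, RailsConfig

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

colang_content = """
define user ask legal advice
  "Can I sue my landlord over this?"
  "Is this contract enforceable?"

define bot refuse legal advice
  "I can't provide legal advice. Please consult a licensed attorney."

define flow legal advice guardrail
  user ask legal advice
  bot refuse legal advice
"""

# Build the rails from the inline config and wrap the underlying LLM.
config = RailsConfig.from_content(colang_content=colang_content,
                                  yaml_content=yaml_content)
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "Can I sue my landlord over this?"}
])
print(response["content"])  # Expected: the canned refusal, not legal advice.
```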
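The self-reflective pattern from the Samwald, Praas and Hebenstreit paper can likewise be sketched in a few lines: one model drafts an answer, a critique pass questions it, and the draft is revised. This is a conceptual sketch only; the generate() stub is hypothetical and stands in for any real LLM call.

```python
# Conceptual sketch of a self-reflective loop: draft, critique, revise.
# generate() is a hypothetical stub standing in for any LLM call.
def generate(prompt: str) -> str:
    # Placeholder: swap in a real model client here.
    return f"[model output for: {prompt[:40]}...]"

def self_reflective_answer(question: str, rounds: int = 2) -> str:
    # First model drafts an answer.
    draft = generate(f"Answer truthfully and transparently: {question}")
    for _ in range(rounds):
        # A second pass plays Socratic critic, probing assumptions and errors.
        critique = generate(
            "Identify hidden assumptions, contradictions or errors "
            f"in this answer: {draft}"
        )
        # The draft is revised in light of the critique.
        draft = generate(
            f"Revise the answer to address this critique.\n"
            f"Answer: {draft}\nCritique: {critique}"
        )
    return draft

print(self_reflective_answer("Should our chatbot ever give medical advice?"))
```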
Best ethical AI practices
Ethical AI resources are a sound starting point toward tailoring and establishing a company's ethical AI framework and launching responsible AI policies and initiatives. The following best practices can help achieve these goals:
- Appoint an ethics leader. There are instances when many well-intentioned people sit around a table discussing various ethical AI issues but fail to make informed, decisive calls to action, Roselund noted. A single leader appointed by the CEO can drive decisions and actions.
- Take a cross-functional approach. Implementing AI tools and technologies companywide requires cross-functional cooperation, so the policies and procedures to ensure AI's responsible use need to reflect that approach, Roselund advised. Ethical AI requires leadership, but its success isn't the sole responsibility of one person or department.
- Customize the ethical AI framework. A generative AI ethics framework should be tailored to a company's own unique style, objectives and risks, without forcing a square peg into a round hole. "Overloaded program implementations," Gupta said, "ultimately lead to premature termination due to inefficiencies, cost overruns and burnout of staff tasked with putting the program in place." Harmonize ethical AI programs with existing workflows and governance structures; Gupta compared this approach to setting the stage for a successful organ transplant -- the program has to fit the host organization, or it will be rejected.
- Establish ethical AI measurements. For employees to buy into an ethical AI framework and responsible AI policies, companies need to be transparent about their intentions, expectations and corporate values, as well as their plans to measure success. "Employees not only need to be made aware of these new ethical emphases, but they also need to be measured in their adjustment and rewarded for adjusting to new expectations," explained Brian Green, director of technology ethics at Markkula Center for Applied Ethics.
- Be open to different opinions. Engaging a diverse group of voices is essential, including ethicists, field experts and those in surrounding communities that AI deployments might impact. "By working together, we gain a deeper understanding of ethical concerns and viewpoints and develop AI systems that are inclusive and respectful of diverse values," said Paul Pallath, vice president of the applied AI practice at technology consultancy Searce.
- Take a holistic perspective. Legalities don't always align with ethics, Pallath cautioned. Sometimes, legally acceptable actions might raise ethical concerns. Ethical decision-making needs to address both legal and moral aspects. This approach ensures that AI technologies meet legal requirements and uphold ethical principles to safeguard the well-being of individuals and society.
Future of ethical AI frameworks
Researchers, enterprise leaders and regulators are still investigating ethical issues relating to responsible AI. Legal challenges involving copyright and intellectual property protection will have to be addressed, Gupta predicted. Issues related to generative AI and hallucinations will take longer to address since some of those potential problems are inherent in the design of today's AI systems.
Enterprises and data scientists will also need to better solve issues of bias and inequality in training data and machine learning algorithms. In addition, issues relating to AI system security, including cyber attacks against large language models, will require continuous engineering and design improvements to keep pace with increasingly sophisticated criminal adversaries.
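What measuring such bias can look like in practice: the short sketch below computes a demographic parity gap -- the spread in positive-outcome rates across groups -- for a set of model decisions. The metric choice and the data are illustrative assumptions, not a prescribed method.

```python
# Illustrative bias check: demographic parity gap on hypothetical decisions.
from collections import defaultdict

def demographic_parity_gap(groups, outcomes):
    """Return the spread in positive-outcome rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in zip(groups, outcomes):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions (1 = approved) by applicant group.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]

gap, rates = demographic_parity_gap(groups, outcomes)
print(rates)                # {'A': 0.75, 'B': 0.25}
print(f"parity gap: {gap}") # 0.5 -- a gap this large warrants investigation
```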
"AI ethics will only grow in importance," Gupta surmised, "and will experience many more overlaps with adjacent fields to strengthen the contributions it can make to the broader AI community." In the near future, Pallath sees AI evolving toward enhancing human capabilities in collaboration with AI technologies rather than supplanting humans entirely. "Ethical considerations," he explained, "will revolve around optimizing AI's role in augmenting human creativity, productivity and decision-making, all while preserving human control and oversight."
AI ethics will continue to be a fast-growing movement for the foreseeable future, Green added. "[W]ith AI," he acknowledged, "now we have created thinkers outside of ourselves and discovered that, unless we give them some ethical thoughts, they won't make good choices."
AI ethics is never done. Ethical judgments might need to change as conditions change. "We need to maintain our awareness and skill," Green emphasized, "so that, if AI is not benefiting society, we can make the necessary improvements."