US AI policy for federal agencies requires transparency

The OMB's new policy calls for federal agencies to be transparent about AI use and designate chief AI officers to coordinate efforts.

The U.S. government has taken another step toward governing its artificial intelligence use by implementing a policy requiring federal agencies to be transparent about how they use AI.

The White House Office of Management and Budget (OMB) set the new AI policy Thursday, demanding that federal agencies report annually on their AI use, particularly those affecting health and safety, and establish safeguards like testing and public impact monitoring. The directive covers AI use across various federal sectors, including health, education, employment and housing.

The policy aims to drive "federal accountability and oversight of AI," the White House said in a statement.

"With these actions, the administration is demonstrating that government is leading by example as a global model for the safe, secure and trustworthy use of AI," according to the statement.

The White House's approach to implementing an AI policy for federal agencies indicates that while the government acknowledges AI risks, it plans to address those risks while also taking advantage of AI, said Darrell West, senior fellow in the governance studies program at the Brookings Institution.


"It means the federal government is going to be using more AI but also paying particular attention to safety and transparency," he said. "There's a huge increase in transparency, there's going to be a lot more information coming out of federal agencies."

OMB AI policy outlines transparency, safety measures

Under the OMB AI policy, agencies will be required to disclose information about their AI use. For example, according to the release, travelers will be notified when airport security uses facial recognition technology and will be able to opt out. The policy will also ensure human oversight of AI systems used in the federal healthcare system and other government services.

If safeguards cannot be applied to meet the AI policy requirements, the federal agency "must cease using the AI system," according to the policy.

Federal agencies will also be required to designate chief AI officers to coordinate AI use and to establish AI governance boards. The Biden administration has also set a goal of hiring 100 AI professionals by this summer.

Growing an AI workforce might challenge federal agencies, particularly as they compete against private companies, West said.

"There's going to have to be a big emphasis on professional development to upgrade the job skills in the AI area," he said.

The OMB AI policy marks an important step for the federal government as it leads by example in the responsible use of AI, said Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, in a statement.

"Rather than have individual agencies grapple with governing and mitigating risks of AI on their own, when many agencies are trying to solve similar problems in these areas, we are glad to see detailed guidance to support agencies' responsible and transparent use of this technology," Givens said.

The OMB AI policy supports transparency and consistent processes across agencies, and elaborates on agencies' ability to responsibly procure AI systems, she added.

Indeed, the OMB AI policy functions as an accountability mechanism for federal agencies, while being consistent with industry principles for AI use, said Maya Wiley, president and CEO of The Leadership Conference on Civil and Human Rights, in a statement.

"We must ensure that technology serves us rather than harms us," Wiley said. "Today, the OMB's guidance takes us one step further down the path of facing a technology-rich future that begins to address its harms."

The AI policy delivers on some of the mandates issued in Biden's executive order on AI. It also builds on other federal guidelines developed to address risks and benefits of AI, including the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology AI Risk Management Framework.

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
