Explore the role of AI in the U.S. federal government

Reported federal AI use cases jumped from 710 in 2023 to 2,133 in 2024. That staggering growth could continue under the Trump administration.

With a new administration in Washington and a $500 billion AI infrastructure push underway, the United States federal government could be entering a period of accelerated AI adoption.

Federal use of AI expanded under the Biden administration, with many federal agencies already using AI for tasks like fraud detection, administrative workflows and data analytics. But experts say the Trump administration is aiming to take that growth even further.

"Trump and some of his advisers have talked about 'unleashing AI,' meaning using it much more extensively within agencies," said Darrell West, a senior fellow at the Brookings Institution's Center for Technology Innovation.

As the Trump administration moves to scale back AI safety measures and strengthen ties with big tech and AI developers, experts predict an uptick in federal AI use. Yet fair and transparent adoption -- alongside a healthy dose of public education -- is needed to earn constituents' trust in these technologies.

Federal AI governance

The federal government began laying the groundwork for AI governance under the first Trump administration, which enacted a pair of AI-related executive orders: EO 13859 in February 2019 and EO 13960 in December 2020.

The latter, which focused on trustworthy AI, created an annual inventory that requires federal agencies to report their use of AI tools and technologies. Under the Biden administration, reported use cases more than tripled from 710 in 2023 to 2,133 in 2024.

Reggie Townsend, vice president of data ethics at SAS and a member of the National AI Advisory Committee (NAIAC), emphasized the importance of EO 13960.

"That use case inventory was a very useful first step," he said. It increases transparency on how the government uses AI tools and services, which helps build trust -- an essential component of the relationship between a government and its constituents.

In October 2023, then-President Joe Biden signed EO 14110, introducing stricter guardrails for both internal and external AI use. That order established a government-wide effort to promote responsible AI development and deployment through industry regulation and safety initiatives.

Internally, EO 14110 instructed all federal agencies to designate chief AI officers to coordinate AI use. It also tasked the Office of Management and Budget with providing guidance for federal agencies on collecting, reporting and publishing AI use cases per the federal inventory enacted by EO 13960.

It also added new stipulations to the reporting process. These included, for example, disclosure requirements for use cases affecting safety and civil rights, a detailed rubric for use case inclusions, and deadlines for meeting risk management requirements.

"The agencies are making progress," West said. "Anything they can do to educate the general public and inform people about how AI is being used ... is very helpful."

Current use cases for AI in government

According to the 2024 consolidated federal use case inventory, 41 federal agencies reported 2,133 public AI use cases. Not every agency was included, as some lacked use cases requiring disclosure, but AI use is trending upward across departments.

The Department of Health and Human Services (HHS) reported the highest number of use cases -- 271 -- representing a 66% increase from the previous year. Among the many applications reported, certain use cases have proven particularly beneficial.

"AI is starting to be used in the federal government ... for fraud detection," West said.

He highlighted AI applications designed to identify unusual behaviors in financial transactions. One such example is the Social Security Administration's Representative Payee Misuse Model, implemented in September 2023, which flags possible representative payee fraud for further review.

"AI is [also] being used for internal administrative processing, to automate routine tasks," West said.

Beyond these efficiencies, AI can foster collaboration across the sprawling network of federal agencies and subagencies. Townsend emphasized AI's ability to curate and evaluate data, which enables more effective cross-agency work.

"One of the great benefits that AI brings to the federal government is an ability to aggregate that knowledge in useful ways," he said.

For instance, AI models can combine data from the Department of Energy and HHS to assess how energy policies affect public health, Townsend explained. Likewise, looking at Department of Education data through the lens of certain HHS information can provide insights into health policy's effects on education outcomes.

Bruce Schneier, adjunct lecturer in public policy at the Harvard Kennedy School, pointed out that this is only the beginning for AI in government. He imagines a future where AI is increasingly ingrained in government processes -- for instance, creating content such as press releases, speeches, legislation, contracts and audits.

AI's ability to fill skill gaps and support humans facing overwhelming workloads makes its benefits for government functions promising. But even with growing interest in adoption, the federal government's AI rollout has been slower than the private sector's, West said.

This hesitancy is especially pronounced with generative AI. Despite growing interest, West said, adoption remains cautious due to the technology's newness and associated risks. Issues such as bias, hallucinations and lack of interpretability have already arisen, raising concerns about reliability for government use cases.

AI under a second Trump presidency

President Donald Trump returned to the White House in January 2025, ushering in a new era that many expect to be marked by limited regulation and a push for global dominance across industries -- including AI.

"Federal AI usage is really going to accelerate under the Trump administration," West said. One early indicator of this shift is Trump's effort to bring prominent tech leaders into the federal fold.

"These are people who really have the background to understand how [AI] tools operate and how they can be deployed," West said.

For example, Trump has asked xAI owner Elon Musk to lead the newly rebranded Department of Government Efficiency. This agency, formerly the United States Digital Service, is tasked with "modernizing Federal technology and software to maximize governmental efficiency and productivity," as stated in its founding executive order.

With Musk at the wheel, the new department is expected to focus on cutting government costs and driving widespread deregulation. But the specifics of AI's role in this modernization effort remain to be seen.

In line with his campaign pledges to reduce AI oversight and roll back Biden-era policies, Trump rescinded Biden's EO 14110 on his first day in office. Rescinding that order, which had implemented guardrails for AI development and deployment in federal agencies, signaled a pivot toward fewer restrictions.

"That executive order tried to create some guardrails in the use of AI in federal agencies," West said. "So, we don't know exactly how Trump will use AI. Will he get rid of all guardrails? Will he keep some protections and get rid of others? We don't really know the answer to that, but it's certainly something important to watch."

So far, the Trump administration has centered its AI approach around minimizing guardrails and developing AI "rooted in Free Speech and Human Flourishing." But the actual implications for governance, including whether policies around safety and transparency will remain, are not yet clear.

The White House and big tech

At Trump's inauguration, Elon Musk, Mark Zuckerberg, Jeff Bezos and Sundar Pichai took front-row seats, signaling the elevation of private sector technologists to the inner sanctum of the federal government. Tech companies including Amazon, Apple, Google, Meta and Microsoft each donated $1 million to the event, with OpenAI CEO Sam Altman contributing a personal $1 million.

Some companies are also aligning with the new administration's stances on content moderation and diversity, equity and inclusion (DEI) policies. Meta, for example, has replaced its fact-checking services with a looser, community-driven system styled after X's Community Notes, in addition to relaxing moderation policies and discontinuing internal DEI initiatives.

Big tech's enmeshment with the federal government could benefit both sides. A softer federal hand on AI safety, sustainability and training data copyright issues could give companies like OpenAI and Microsoft more freedom, while their advancements could support the administration's push for global AI dominance.

AI's future role in government

The U.S. federal government appears ready to accelerate AI adoption. Just two days after his inauguration, Trump announced a $500 billion AI infrastructure investment. As part of this initiative, four major companies -- OpenAI, SoftBank, MGX and Oracle -- are collaborating to form Stargate, a new entity dedicated to expanding AI infrastructure in the United States, including developing data centers nationwide.

During the White House announcement, Trump called the formation the "largest AI infrastructure project by far in history." He also emphasized the importance of AI development in maintaining U.S. leadership in the global AI race, particularly against China.

If the U.S. is indeed headed toward increased AI use, the way the government adopts the technology will matter, West said.

"Federal agencies have more serious privacy and security interests than an individual consumer might have," he explained, pointing to legal considerations that are unique to government agencies. Technologists like Musk, for instance, might not have encountered these issues in private industry.

"It's going to be really important that when [the Trump administration] starts to accelerate usage, they do it in a fair manner," West said. This includes ensuring that relevant privacy, security and transparency measures are in place for all use cases.

Building AI literacy within the U.S. government

Another aspect of careful adoption is the role of education. "There's a lack of expertise in many federal agencies on how to use AI," West said. Without training, it's difficult for workers to ascertain what AI tools will be useful, how they should procure and implement those tools, and how teams should use them most effectively.

A major part of Townsend's role as a NAIAC member is building AI literacy, including among federal workers.

"The federal workforce needs to have foundational knowledge about [AI], just like the rest of the country," Townsend said. He suggests that training should be creative and role-specific, since not everyone needs to understand AI in the same way or to the same extent.

Making AI accessible to the public is important, too, Townsend said. Tools like the National AI Research Resource can help. Created under the National AI Initiative Act of 2020, NAIRR extends AI infrastructure and research capabilities to the public, furthering equitable access to AI development and education.

Townsend calls NAIRR part of the "ecosystem of enablement": It helps create an informed and active public, who can then more confidently participate in the conversation around deploying AI in public services.

"The public has to be fluent enough in this conversation to be able to hold its government accountable," Townsend said.

Olivia Wisbey is the associate site editor for TechTarget Enterprise AI. She graduated with bachelor's degrees in English literature and political science from Colgate University, where she served as a peer writing consultant at the university's Writing and Speaking Center.
