Government use of AI by key U.S. military branches

The Air Force and Army are exploring applications of GenAI, including using their own large language models. While some use cases are successful, others are not.

On Oct. 7, 2023, the U.S. Central Command, which is responsible for defense operations in the Middle East and Central Asia, was stunned when Hamas attacked Israel, setting off a brutal war in Gaza.

With Iran-backed militias increasingly attacking U.S. military bases in Iraq and Syria, the command's operations were disrupted.

"Within a 24-hour period, the pace and operational tempo of our command changed," said Schuyler Moore, CTO of  U.S. Central Command, during a panel discussion on Oct. 8 at the Nvidia AI Summit in Washington D.C.

While this meant a big shift on the ground, it also led to a management change.

Operationally, Central Command saw an increase in the number of meetings, as well as the need for summaries of those meetings.

"Suddenly, this crushing weight descended on our staff for the requirement to suddenly participate, summarize and push all of this information that was related to critical events in a timely way not only within our own command, but also outside of our command, to the National Security Council and more," Moore said.

Finding a way with GenAI

One way Central Command explored dealing with this disruption was with generative AI.

With speech recognition technology such as Whisper from OpenAI, the apparent solution was to transcribe the audio recordings of meetings and generate summaries from the transcripts.
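
A minimal sketch of that transcribe-then-summarize approach might look like the following, assuming the open source Whisper package and an OpenAI-compatible endpoint; the model names and prompt are illustrative, not details of Central Command's actual pipeline.

```python
# A hedged sketch: transcribe a meeting recording with Whisper, then ask an
# LLM for a summary. Model names and the prompt are assumptions for this
# example, not Central Command's actual configuration.
import whisper
from openai import OpenAI

def summarize_meeting(audio_path: str) -> str:
    # Step 1: transcribe the recording locally with Whisper.
    transcript = whisper.load_model("base").transcribe(audio_path)["text"]

    # Step 2: ask an LLM to summarize the transcript.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this meeting transcript for a staff briefing."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```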

However, that approach did not have the desired effect, Moore said.

Recipients of the summaries had different requirements about what they considered important.

Through the experience, Central Command realized that using generative AI to fully automate summaries was not helpful.

However, the failed use case was just the beginning for the command.

Between early January and February, Central Command found a way to make a large language model (LLM) available on the government's classified network, the Secret Internet Protocol Router Network (SIPR).

"Working on a classified network is like working in a barren wasteland," Moore said. "You are not able to access the open internet."

However, most of the work Moore's team does is on SIPR, with almost none done on unclassified networks where tools such as Google are available.

"If we need applications built, we need them on SIPR," she said. "If we need tools available, we need them on SIPR."

An LLM on SIPR was useful for several applications.

The first was code augmentation and generation.

Previously, programmers had to work across two networks: SIPR for classified work, and an unclassified network on a separate computer where they could search Google.

With the LLM on SIPR, everything was in a single location.

"We think the obvious reason that it was particularly successful is it's quite easy to catch errors very candidly with quick utility, but also very quick to notice if your code just doesn't generate the output and low risk in general, if it is not doing it correctly," Moore said.

Another use of generative AI for Central Command is machine-assisted disclosure.

This means the LLM helps a human reviewer determine which documents are classified and must be withheld, and which can be disclosed.
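
A minimal sketch of that human-in-the-loop pattern might look like the following, assuming an OpenAI-compatible endpoint; the prompt and model name are illustrative, and the model's output is advisory only.

```python
# A sketch of machine-assisted disclosure: the LLM flags passages for a
# human reviewer rather than deciding releasability itself. The prompt and
# model name are assumptions for this example.
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = (
    "You assist a human disclosure reviewer. List any passages in the "
    "document that may be sensitive, with a one-line reason for each. "
    "Do not decide releasability yourself."
)

def flag_for_review(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": document_text},
        ],
    )
    # Advisory output only; the human reviewer makes the final call.
    return response.choices[0].message.content
```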

Finally, Central Command has found that generative AI can still help with summarization, but its capabilities are limited.

Instead of feeding the LLM multiple documents and having it spit out a “magic” summary, Central Command learned that the model could lead a user to the specific part of a document they need to focus on.

"Treating large language models as a cog in a broader wheel, it is not the wheel. It will not do the entire process for you and spit out your outcome," Moore said.

Central Command is not the only government agency using generative AI as a tool.

The Air Force and NIPRGPT

The Air Force has also created its own AI chatbot, called NIPRGPT.

NIPRGPT is part of the Dark Saber software platform developed at the Air Force Research Laboratory.

Dark Saber brings together U.S. Air Force innovators and developers who create software systems.

NIPRGPT enables users to have human-like conversations with it. As a generative AI tool, it can answer questions and assist with tasks such as coding within a secure environment.

"When we originally set out to host this, in the beginning, we thought that most people wouldn't use it," Collen Roller, a senior computer scientist at the Air Force Research Laboratory, said during the panel discussion. He added that the hypothesis was that while there was demand for ChatGPT and other generative AI chatbots in the consumer world, that may not be the case in the government sector.

However, that hypothesis proved to be wrong.

"The demand for GenAI in the Department of Defense is real," Roller said. "It's something that we collectively need to care about and focus on. And we had a lot of interest in NIPRGPT right out of the gate."

Many people used NIPRGPT for toil reduction tasks such as creating presentation outlines or roll-ups of different meetings, he added.

Air Force officials had also feared that NIPRGPT could be misused.

"We thought that when people first went to a government GPT, we thought that people were going to be asking maybe how to make something destructive, or trying to ask some ridiculous questions," Roller said. "We have found that that's not the case. People are just trying to get time back."

The Army and CamoGPT

Saving time is also important to the U.S. Army, which is testing a GPT product called CamoGPT. CamoGPT is currently used by about 10,000 Army members.

Testing this model has led the Army to ask hard questions about whether the technology is worth investing in and what soldiers will use it for, said Isaac Faber, director of the U.S. Army AI Center, during the same panel discussion with Moore and Roller.

"From the Army's perspective, we're going to experiment a lot on … [and look at] what it means to adopt some of these technologies," Faber said.

While the Army and others in the government sector are taking advantage of generative AI technology, they are doing so cautiously.

GenAI risk

"There's a risk dimension of AI," said Young Bang, principal deputy assistant secretary of the Army for acquisition, logistics and technology during a different panel exploring scaling generative AI technology.

Army policies governing the use of the technology, including the use of watermarks, aim to ensure that data is protected.

The Army is also aware of how AI technology can be used against the U.S.

"How do we counter AI, or how are our peer threats looking at AI and using it on us?" Bang said. "Counter AI is critical for us."

The Army is also exploring what artificial general intelligence (AGI), a concept describing AI capable of doing anything a human can do, will mean.

"When you get to AGI, it's really behaviors and outcomes that we have no idea what the outcome should be," Bang said.

"We're trying to look two, three steps ahead of what are going to be the roadblocks for us to adopt at mass and scale, and the volumes that we're talking about, we're trying to address that now," he added.

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
