AI agents raise stakes in identity and access management

IT vendors roll out fresh tools to take on identity and access management for AI agents as enterprises deploy them internally and battle malicious ones externally.

A twin challenge for identity and access management is emerging alongside AI agents: setting effective security rules for unpredictable nonhuman actors and keeping a burgeoning army of malicious agents out of enterprise networks.

AI agents are software entities backed by large language models (LLMs) that can autonomously use tools to complete multistep workflows. While still in its infancy, agentic AI is widely considered to be the future of generative AI apps as standard orchestration frameworks and agent-building tools mature.

Some cybersecurity practitioners say existing practices are enough to defend against unwanted actions from authorized agents that companies will deploy. Others are developing tools that blend machine and human identities to mitigate agentic AI threats.

There will also be cases where enterprises want AI agents to access data on their networks. Here, some experts predict that devising guardrails for agentic AI environments will be harder and riskier than for humans and traditional machine workloads, especially given that generative AI remains new and prone to unpredictable errors.

"Think about an agent that's performing scheduling within a data center or making resource allocation decisions within the cloud," said Gang Wang, an associate professor of computer science at the University of Illinois Urbana-Champaign. "The agent may be, on average, more efficient and effective at allocating a resource to the right nodes than a human, but they might have some catastrophic decision-making if [a task] is out of their training range."

Prompt injection also factors into the potential hazards of agentic systems by worsening a long-standing problem for web-based apps, Wang said.

"There is a security challenge that's been there for decades, which is correctly separating data from command," he said. "This has been a problem that web security people have been trying to solve, and there are still issues here and there that cause services to be compromised because of attacks like SQL injection."

Now that LLMs take in not just text and code but also images and video, anything displayed on a computer screen could potentially be interpreted by an AI agent as a prompt, Wang said. The consequences could be hard to predict.

"Imagine if you visit a website that has an image with little words in it that says, 'Delete your inbox,'" he said. "One of my students just ran a demo to show that's actually doable. Computer-use models will take a screenshot and take those little words as a command and execute them."

Facilitating access for internal AI agents

Another wrinkle for identity and access management in agentic AI environments is supporting legitimate connections between AI agents and their tools, including tools external to a company, without IT teams having to set up authentication and authorization for each service ahead of time. Passwordless web authentication vendor Stytch launched a product, Connected Apps, in February to address that scenario.

This week, Stytch added a Remote MCP Authorization feature to Connected Apps to support remote Model Context Protocol (MCP) servers, including those launched by Cloudflare on March 25. These services build on a March update to Anthropic's MCP spec that added support for OAuth, while addressing community criticisms of how the spec handles it. Okta subsidiary Auth0 is also part of Cloudflare's partnership program for remote MCP servers.
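
Under the hood, the authorization handshake is standard OAuth 2.0. A minimal sketch of the token exchange an agent might perform against a remote MCP server's authorization endpoint; the URLs and client credentials below are placeholders, not a real service:

```python
import requests

TOKEN_URL = "https://mcp.example.com/oauth/token"  # placeholder endpoint

def exchange_code_for_token(auth_code: str) -> str:
    """Trade the code from the user-consent step for an access token."""
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "authorization_code",
            "code": auth_code,
            "redirect_uri": "https://agent.example.com/callback",
            "client_id": "example-agent",
            "client_secret": "example-secret",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

# The agent then presents the token on each request to the MCP server:
# headers = {"Authorization": f"Bearer {access_token}"}
```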

It will take time for agentic AI to be ready for prime time in customer-facing environments like the one maintained by Crew Finance, a fintech startup in Lehi, Utah. In the meantime, Crew co-founder Steve Domino said he's considering Connected Apps for use with the company's chatbot, Penny.

"In the future, where people are really comfortable with AI agents doing things on their behalf, she could go sign you up for [a new] insurance company ... or secure a loan," Domino said. "The way that we'll do that securely is by having her use something like Connected Apps [so that] we can issue tokens so that she can securely connect to other agents, or we can connect other AI agents to Crew, and then [manage] permissions."

To more effectively manage access to corporate data in anticipation of agentic threats, global satellite network operator Aireon uses identity security software from Oleria. The software centralizes visibility into which identities can access which data and can change those permissions programmatically, as needed, on both internal and third-party systems.

"If I see an account name get exposed along with the password and user ID, it used to take a couple days to figure out everything it had access to, what we needed to protect and how we need to protect it," said Tom Rudolph, senior manager of enterprise IT at Aireon. "It was a very manual process. Now, we can pull up one pane of glass and go, 'Show me everything that account has access to,' and we can change those permissions on the fly."

Rudolph is using an agent-building framework called Kindo to develop an agentic version of Oleria for Aireon's environment. To some extent, the scale of agentic automation will require AI agents to secure it, too, according to Peter Clay, chief information security officer at Aireon.

But there are also some unanswered questions and inherent risks around agentic identity and access management, Clay said.

"The same contradictions and functions are there that have always been there [between digital and human identities]. What's different is, things are happening much faster and with a much greater depth of information," he said. "I think the market is going to do away with human-based authentication completely, and you're going to start to see more algorithm skipping cryptography synchronization processes and things like that."

Containing malicious AI agents

AI agents in the hands of attackers can operate at a scale beyond human capabilities and more cleverly disguise themselves than traditional malware, according to Reed McGinley-Stempel, co-founder and CEO at Stytch.

"We have data on the percentage of headless browsers being used against our customers ... In 2024, it went from 3% of all traffic to 8% of all traffic ... Still not a huge number, but a lot of those probably are agentic [or] headless browsing use cases where they're trying to scan for vulnerabilities," McGinley-Stempel said. "So that's one big topic I think about, where it's now much more viable for fraudsters to do the scanning and detection of vulnerabilities."

Another focus for McGinley-Stempel arose with tools such as OpenAI's Operator, Anthropic's computer-use API and Browserbase's Open Operator, which convincingly mimic a human operating a computer. With a hijacked version of such a tool and a farm of cheap devices, an attacker could evade defensive methods that look for programmatically generated traffic from a single source, he said.

"Agents blend and blur those lines," McGinley-Stempel said.

Some IT security executives believe that defending against malicious AI agents requires a fundamental shift in identity and access management approaches -- for one CEO, enough of a sea change to prompt a rethink of his company's product.

"The first few versions of our system, we focused on the identities of humans and their laptops, but now we are launching a machine and workload identity product," said Ev Kontsevoy, co-founder and CEO at Teleport, a secure systems access vendor.

Teleport Machine & Workload Identity, launched Feb. 25, is part of the broader Teleport Infrastructure Identity Platform, which combines zero-trust access controls, machine and workload identity, and cryptographic identity. It's not unlike the Private Cloud Compute environment that Apple introduced for private AI processing in 2024, but packaged for enterprises that don't have big tech's engineering resources to build their own, Kontsevoy said.
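
A minimal sketch of the workload identity idea at the core of such platforms: each workload presents a cryptographically verifiable identifier (here, a SPIFFE-style URI) instead of a shared secret, and authorization keys off that identity. The IDs and permission table below are illustrative assumptions, not Teleport's implementation:

```python
# In production, the workload ID comes from a validated mTLS
# certificate issued by the platform, not a caller-supplied string.
ALLOWED: dict[str, set[str]] = {
    "spiffe://example.org/agents/scheduler": {"read:metrics", "write:schedule"},
    "spiffe://example.org/agents/reporter": {"read:metrics"},
}

def authorize(workload_id: str, permission: str) -> bool:
    return permission in ALLOWED.get(workload_id, set())

assert authorize("spiffe://example.org/agents/reporter", "read:metrics")
assert not authorize("spiffe://example.org/agents/reporter", "write:schedule")
```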

What's old is new again?

Stytch's McGinley-Stempel, meanwhile, posited that his company's existing device fingerprinting and automatic rate-limiting features would help websites detect and slow down malicious AI agents posing as humans, and do so more effectively than banning computer-use agent traffic entirely or restricting IP addresses.

"The same things that we built in order to detect click farms work quite well with the way that these computer-use API attacks get set up," he said. "It creates a pooled identifier of these different hardware and network fingerprints that are commonly associated with that type of abuse behavior, and then creates risk scores on them so that [users] can dynamically rate limit those types of [traffic] clusters."

Digital fingerprinting and rate limiting have limitations that depend on how they're implemented, McGinley-Stempel acknowledged, and they don't resolve every identity and access management issue for agentic AI.

"You can at least change the economics of whether your site will be targeted for this, because [attackers] will likely move to the sites that are not doing that type of thing," he said.

Another software company founder also disputed the idea that AI agents require an overhaul of identity management tech.

"The bottom line is, it doesn't matter if you are trying to secure a human identity or a machine that is assuming a human identity role. If you are giving someone the ability to take action on your behalf, there are checks and balances that need to be in place, and that doesn't change," said Amit Govrin, co-founder and CEO at Kubiya, which launched a Kubernetes-based agentic AI platform at KubeCon + CloudNativeCon Europe this month.

The Kubiya platform builds in attribute-based access controls for agents enforced by Open Policy Agent and supports user-set permissions and time-to-live configurations for agent access.
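
A minimal sketch of what enforcement through Open Policy Agent can look like from the caller's side, using OPA's REST data API. The policy path and input attributes are illustrative; the Rego policy itself, which Kubiya's platform would supply, lives on the OPA side:

```python
import requests

OPA_URL = "http://localhost:8181/v1/data/agents/allow"  # illustrative policy path

def agent_may_act(agent_id: str, action: str, resource: str,
                  grant_expires_at: float) -> bool:
    """Ask OPA whether the agent's attributes satisfy the policy.

    The expiry attribute lets the policy enforce time-to-live, denying
    grants that would otherwise become permanent roles."""
    resp = requests.post(OPA_URL, json={"input": {
        "agent_id": agent_id,
        "action": action,
        "resource": resource,
        "grant_expires_at": grant_expires_at,
    }}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("result", False)
```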

While the technology to lock down agentic AI systems isn't necessarily new, there is one significant difference with AI agents, in Govrin's view.

"We have an even higher responsibility to ensure agent-actors don't receive permanent roles, because they will become even more prevalent than humans in the future, [and] the blast radius can be that much bigger if left unchecked," Govrin said. "It's the same threat vector with a different form factor."

Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.
