How AI agents are the next wave of generative technology
As agentic technology grows more popular, some are optimistic about what appears to be the next phase of AI. However, there are concerns about the tasks agents are doing.
NEW YORK -- AI agents are much-talked-about new tools that more vendors and AI developers are pushing into the marketplace.
Despite the excitement, some are hesitant about the new technology that is being touted as the next wave of generative AI.
The cautious excitement and curiosity about AI agents were palpable at the AI Summit conference here on Wednesday.
The next level
“AI agents are here; they’re going to scale,” said Tim Cotten, CEO and founder of Scripted Inc., a generative AI game development platform powered by autonomous agents.
Speaking during a session about AI agents and game development, Cotten predicted that a third of the companies at the summit will likely be “agentified” soon.
Just as humans can allocate their resources under the market system, AI technology can do the same, he added during an interview with Informa TechTarget Editorial.
“Agents allow you to spread your influence while you’re asleep,” Cotten continued. “They literally can just do the job for you and come up with more ideas and then create more agents to represent new ideas.”
However, a big source of confusion about AI agents is when those who use them try to make them do too much, Cotten said.
Kenneth Malac, founder of Paradigm Interactive, also said he sees agents as the next big evolution of AI technology. Paradigm Interactive helps enterprises create business intelligence insights using data visualization.
“There’s a big opportunity,” Malac said in an interview.
However, the ROI of agentic AI is still unknown, he said.
“The issue of how we fit agents into empowering employees [is still not clear],” he added.
One versus multiple agents
Another issue is making sure each agent performs a single task.
“If you have a large application and it has different components, it’s better to have a specialized agent sitting on top of that delivery only,” Atif Khan, chief artificial intelligence officer at Semantex, which develops an AI-based content intelligence platform, said in an interview.
Semantex is a division of Messagepoint, a customer communications management provider.
For example, there should be an agent for search, an agent for documentation, another one for accounting and still others that take in complaints or provide customer service, he said.
“Building those agents in that fashion so they’re modular, they’re independent, you can train one better than the other,” Khan continued. “It has to be a heterogeneous set of agents instead of one large agent trying to control everything.”
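A minimal sketch of what such a modular, heterogeneous setup could look like in code, assuming a simple task-based router; the agent classes and routing keys here are illustrative examples, not any vendor's actual implementation:

```python
# Illustrative sketch of specialized agents behind a single router.
# Agent classes and routing keys are hypothetical, not a product API.

class SearchAgent:
    def handle(self, request: str) -> str:
        return f"[search results for: {request}]"

class DocumentationAgent:
    def handle(self, request: str) -> str:
        return f"[relevant docs for: {request}]"

class AccountingAgent:
    def handle(self, request: str) -> str:
        return f"[ledger lookup for: {request}]"

class CustomerServiceAgent:
    def handle(self, request: str) -> str:
        return f"[support response for: {request}]"

# Each agent owns one narrow responsibility and can be trained,
# tested and replaced independently of the others.
AGENTS = {
    "search": SearchAgent(),
    "docs": DocumentationAgent(),
    "accounting": AccountingAgent(),
    "support": CustomerServiceAgent(),
}

def route(task_type: str, request: str) -> str:
    """Dispatch a request to the specialized agent for that task type."""
    agent = AGENTS.get(task_type)
    if agent is None:
        raise ValueError(f"No agent registered for task type: {task_type}")
    return agent.handle(request)

if __name__ == "__main__":
    print(route("search", "Q3 churn numbers"))
    print(route("support", "customer cannot log in"))
```

The design choice mirrors Khan's point: rather than one large agent trying to control everything, each component gets its own agent sitting on top of it, and the router is the only piece that knows about all of them.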
Navigating agentic hallucinations
Focusing agents on specific tasks will also reduce the complexity of the job the generative AI-based agent must do.
Then there’s the challenge of making sure agents stay on task and don’t hallucinate in a setting in which multiple AI agents are working together and communicating.
“It’s important for organizations to think about where human review needs to occur,” Drayton Wade, chief operating officer at Kognitos, an AI and process automation vendor, said in an interview.
When a string of agents is present in finance, for example, a systemic risk could occur because of the speed and scale of the AI agents' work.
One way to mitigate this is human validation: putting a human in the loop to double-check what the agents are doing.
“The more you can log the activities of the agents, that helps create a record that we can monitor,” Wade said. “It requires people to think about where and how agents are used for what use cases, and where are they allowed to make decisions by themselves.”
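As a rough sketch of the pattern Wade describes, the snippet below logs every agent action and holds high-impact ones for human approval; the threshold, function names and finance scenario are assumptions made for illustration only.

```python
# Illustrative sketch of audit logging plus a human-review gate.
# The review threshold and all names are hypothetical assumptions.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("agent_audit")

@dataclass
class AgentAction:
    agent_name: str
    description: str
    amount: float  # e.g., a payment the agent wants to initiate

HUMAN_REVIEW_THRESHOLD = 1_000.00  # assumed policy: large actions need sign-off

def requires_human_review(action: AgentAction) -> bool:
    """Decide whether a human must approve before the action executes."""
    return action.amount >= HUMAN_REVIEW_THRESHOLD

def execute(action: AgentAction, approved_by: str | None = None) -> None:
    # Every action is logged, creating the monitorable record.
    log.info("agent=%s action=%s amount=%.2f approved_by=%s",
             action.agent_name, action.description, action.amount, approved_by)
    if requires_human_review(action) and approved_by is None:
        log.info("HELD for human review: %s", action.description)
        return
    log.info("EXECUTED: %s", action.description)

if __name__ == "__main__":
    execute(AgentAction("invoice-agent", "pay vendor invoice", 250.00))
    execute(AgentAction("invoice-agent", "pay vendor invoice", 5_000.00))
    execute(AgentAction("invoice-agent", "pay vendor invoice", 5_000.00),
            approved_by="finance-manager")
```

The point of the gate is less the specific threshold than deciding, per use case, where agents may act alone and where a person must sign off before anything irreversible happens.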
Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.