Google on Wednesday introduced new ways to build and manage multi-agent ecosystems with Vertex AI and a new Agent2Agent Protocol.
Also at its Google Cloud Next 2025 conference in Las Vegas, the vendor introduced enhanced reasoning for the Gemini 2.5 models and new features for its multimodal models on the Vertex AI platform.
The cloud provider's focus on agents (it also unveiled new agentic AI capabilities across its applications Wednesday) reflects the dominant current trend in generative AI of autonomous agents that enterprises can use to carry out business tasks.
"What the market is now starting to build are the foundations you need to create more useful agents," said Gartner analyst Chirag Dekate. "Agents, at the end of the day, are only as useful as the ecosystems they live and operate and mechanize within."
Agent building tools
One way Google is appealing to enterprises is with new agent-building tools in Vertex AI, one of which is the Agent Development Kit (ADK).
Google said ADK is an open source framework for designing agents and multi-agent systems. With it, developers can build an AI agent in fewer than 100 lines of code. The kit is available in Python and lets users shape how agents think and reason, choose the model they work with, and deploy agents through integration with Vertex AI. ADK is optimized to work with Gemini and Vertex AI.
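For illustration, here is a minimal sketch of what such an agent can look like, assuming the open source google-adk Python package and following the pattern in Google's ADK quickstart. The tool function, agent name and model ID are illustrative, not part of Google's announcement.

```python
# Minimal ADK agent sketch, assuming the open source google-adk package
# (pip install google-adk). Names and the stub tool are illustrative.
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Illustrative Python tool the agent can call (stubbed here)."""
    return {"order_id": order_id, "status": "shipped"}

# The Agent class lets developers pick the model and shape the agent's
# behavior through natural-language instructions and plain Python tools.
root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",  # any supported Gemini model ID
    description="Answers customer questions about orders.",
    instruction="Use the get_order_status tool to answer order questions.",
    tools=[get_order_status],
)
```

Google's documentation describes running an agent like this locally with the adk command-line tool and deploying it to Vertex AI for production use.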
Google also introduced Agent Engine, another agent-building tool, which lets users deploy agents built in any framework. The engine connects to Agentspace, giving users an avenue to create and surface more agents.
The tech giant released Agentspace in December 2024. It is a tool that combines search and AI agents and seeks to help employees better understand data that is spread across their organization.
Google revealed that Agentspace is now integrated with Chrome Enterprise, so employees can use Agentspace's search capabilities from the Chrome search box. The cloud provider also introduced two new features in Agentspace. One is Agent Gallery, which is now generally available. It enables employees to identify all the new agents available within the enterprise. The second is Agent Designer, which is in preview. It is a no-code interface for creating custom agents that connect to enterprise data sources and automate knowledge work tasks.
Agentspace also includes two new expert agents alongside the previously available NotebookLM for Enterprise: the Deep Research agent, which explores challenging topics and synthesizes information from internal and external sources, and the Generation agent, available in preview, which Google said helps employees develop new ideas.
Agentspace also supports Google's new Agent2Agent (A2A) protocol, an open protocol that enables agents to communicate with one another, regardless of the framework or vendor platform on which they are built.
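In practice, A2A has each agent publish a JSON "agent card" describing its capabilities, which agents on other frameworks can discover before sending it tasks over HTTP. The sketch below shows the rough shape of such a card as a Python dict; the field values are invented for illustration, and the authoritative schema lives in the A2A specification.

```python
# Rough, illustrative shape of an A2A "agent card": the JSON document an
# agent publishes (by convention at /.well-known/agent.json) so agents
# built on other frameworks or vendor platforms can discover and call it.
# Field values are invented; see the A2A spec for the exact schema.
agent_card = {
    "name": "invoice-agent",
    "description": "Extracts line items from uploaded invoices.",
    "url": "https://agents.example.com/invoice",  # where tasks are sent
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "extract_line_items",
            "name": "Extract line items",
            "description": "Parse an invoice into structured line items.",
        }
    ],
}
```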
More than 50 partners, including Deloitte, Elastic, Salesforce and UiPath, are working with Google on the protocol.
The protocol complements Anthropic's Model Context Protocol (MCP): MCP equips agents with data and tools, while A2A handles communication between agents. MCP is a framework that AI foundation model provider Anthropic proposed in 2024 to connect AI systems with data sources, and it has received wide industry support.
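To make that division of labor concrete, the sketch below shows a minimal MCP server built with the official mcp Python SDK's FastMCP helper; the order-status tool it exposes is a stub invented for illustration. A server like this supplies an agent with data, while A2A would handle that agent's conversations with other agents.

```python
# Minimal MCP server sketch using the official mcp Python SDK
# (pip install "mcp[cli]"). The order-status tool is an illustrative stub;
# MCP's job is to expose data sources and tools like this to AI systems.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Illustrative data-source lookup: return an order's status."""
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # serve the tool to MCP-compatible clients
```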
Google and agents
"They have built and are getting ready to lean into how enterprises build and operate agents," said Mark Beccue, an analyst at Enterprise Strategy Group, now part of Omdia.
Google's approach, Dekate said, recognizes that for enterprises, agents are the vehicle through which they tap the creative potential of generative AI. Thus, with its platform integration and agent-to-agent ecosystem, Google is creating a foundational layer that allows it to compete effectively in the agentic AI market, he said.
"Some of these innovations, like Agent2Agent Protocols, will likely be far more impactful in the long run," he added. "It's less about one agent communicating with environments and more about multiple agents coordinating with one another to deliver the outcomes you need."
However, a multi-agent system is still an uncomfortable idea for enterprises, said Forrester Research analyst Rowan Curran.
"What we see today is that most agents operate either mostly independently … or are coordinating across a whole field of different agents, passing between them in some linear fashion," Curran said. "There's still a lot of unanswered questions around testing, validation and governance of these [multi-agent systems]."
With many agent-building systems now available from Google and competitors like Microsoft and Salesforce, Dekate said enterprises should be ready to experiment with and pilot the different agentic tools and technologies.
"When they pilot many of these agents, they're going to discover that genetic ecosystems today are evolutionary, not revolutionary," he said. "The model capabilities coming down the road with larger context sizes, with greater reasoning capabilities, will likely enable more powerful agentic ecosystems."
Current agents are basic, and nuances such as security and debugging are still being worked out and developed, Curran said.
For its part, Google is addressing the security concerns of agent building with new capabilities within Vertex AI and Gemini. For example, users can manage agent permissions with identity controls that determine how agents operate, and they can establish guardrails around agents to control their interactions.
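Implementation details vary by product, but in an ADK-style framework a guardrail can be expressed as a callback that vets each proposed tool call before it executes. The sketch below is a hypothetical example built on ADK's before_tool_callback hook; the allow-list policy is a stand-in for real identity and permission controls, not Google's actual mechanism.

```python
# Illustrative guardrail: an ADK before_tool_callback that vetoes tool
# calls outside an allow-list. The policy is a stand-in for real identity
# and permission controls; only the callback hook follows ADK's API.
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Stub tool for the example."""
    return {"order_id": order_id, "status": "shipped"}

ALLOWED_TOOLS = {"get_order_status"}  # assumed per-agent permission set

def block_unapproved_tools(tool, args, tool_context):
    # Returning a dict short-circuits the tool call; None lets it proceed.
    if tool.name not in ALLOWED_TOOLS:
        return {"error": f"Tool '{tool.name}' is not permitted for this agent."}
    return None

guarded_agent = Agent(
    name="guarded_support_agent",
    model="gemini-2.0-flash",
    instruction="Answer order questions using only approved tools.",
    tools=[get_order_status],
    before_tool_callback=block_unapproved_tools,
)
```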
With these agent-building tools and engines, Google is trying to bridge the gap between today's prebuilt agents, which fit into established workflows, and what experts hope to see in the future: prebuilt agents operationalized at much greater scale, said Futurum Group analyst Bradley Shimmin.
"What we're seeing from Google at Google Cloud Next is this company creating all the pieces that would go into enabling enterprises to get there," he said. "When I look at what Google's got here, and to a lesser degree, Microsoft, I see all of the tools being gathered together that I would need as an enterprise to move from basic GenAI-based tasks to a more autonomous, multistep workflow-based software."
Gemini and multimodality
While Google is focused on helping enterprises build agents, the cloud provider is also emphasizing reasoning.
Nearly a month after introducing its Gemini 2.5 thinking models, the vendor launched Gemini 2.5 Flash. Google said Gemini 2.5 Flash is a workhorse model optimized for low latency and reduced cost. It will feature dynamic and controllable reasoning and can adjust processing time based on the difficulty of the query. The model will soon be available on Vertex AI.
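In developer terms, that controllable reasoning surfaces as a "thinking budget" that callers can raise or lower per request. The sketch below assumes the google-genai Python SDK and its documented ThinkingConfig; the budget value and prompt are illustrative.

```python
# Hedged sketch of controllable reasoning on Gemini 2.5 Flash via the
# google-genai SDK (pip install google-genai). thinking_budget caps how
# many tokens the model may spend reasoning before it answers; the
# specific value and prompt here are illustrative.
from google import genai
from google.genai import types

client = genai.Client()  # reads API key/project settings from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the tradeoffs between latency and answer quality.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=512)  # tokens
    ),
)
print(response.text)
```

A lower budget trades reasoning depth for latency and cost, which is the lever behind the model's ability to adjust processing time to the difficulty of a query.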
To help users determine whether to use Gemini 2.5 Pro or 2.5 Flash, Google introduced Vertex AI Model Optimizer in experimental mode. The optimizer automatically generates the highest-quality response for each query based on desired quality and cost, Google said.
While Google is not the only vendor to offer an optimization tool, the feature shows how the cloud provider is helping enterprises avoid choosing the wrong model for a given task, Shimmin said.
"By introducing things like this model of optimizer, Google is helping enterprises to build more and explore more without being over their own feet, if you will, because they may be using the wrong model with the wrong settings for the wrong task," he said.
While Google offers other models in its model garden from providers like Anthropic and Mistral, enterprises are intrigued by Gemini, Dekate said.
"Google has gotten its act together in the last year or so, and it seems poised not to give up its lead," he said.
Google also said the Gemini models will be available on Google Distributed Cloud (GDC) in public preview in the third quarter.
GDC is a portfolio of hardware and software systems that extends Google's services to the edge and data center, according to Google. The cloud provider partnered with AI hardware and software vendor Nvidia to bring its Gemini models to Nvidia Blackwell systems.
The ability to access Gemini on premises gives enterprises with locally based ecosystems access to proprietary Gemini models, Dekate said.
The developments illustrate Google's momentum in the AI market.
"Google has aggressively moved forward in an AI-native cloud experience," he said.
Google Agentspace search will also be available on GDC in public preview starting in the third quarter of 2025.
Google also added Lyria, its text-to-music model, to Vertex AI in preview, making Vertex AI a platform with generative media models across modalities such as video, image, speech and music. The vendor also added new editing and camera control features, in preview, for Veo 2, its video generation model.
Chirp 3, Google's audio generation model, now includes Instant Custom Voice, a new way to create custom voices with 10 seconds of audio input.
The Google releases come a day after Amazon introduced a new foundation model called Amazon Nova Sonic. The model combines speech understanding and generation to create more human-like voice conversations, Amazon said.
Google also added Imagen 3 Editing, which improves the quality of object removal and the overall editing experience.
Curran said that while Google currently has the advantage of enabling users to input audio, text, image and video, as well as to receive different modalities as outputs, this advantage might be short-lived.
"This space is moving very quickly, and I think it would be an overestimation to say that this is going to be a significant long-term advantage," he said.
"Everybody will be going after the multi-modality piece."
He added that the true innovation will come from what enterprises build with the Agent Development Kit and the multimodal models.
"If enterprises can feel like they can trust those models, then we can start to see some applications for doing some exciting things by stitching together all of those different modalities and being able to produce responses and content ... based upon all of this," Curran continued.
Google also revealed multiple partnership deals, including one in which Gemini on Vertex AI helps power Reddit Answers.
Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems.