Google's Looker taking an agentic approach to generative AI

The analytics vendor aims to help customers develop agents, built on a foundation of trusted data, that enable deep analysis and relieve users of time-consuming tasks.

It's no secret -- agents are all the rage.

Two years after OpenAI's launch of ChatGPT sparked a surge of interest in generative AI, data management and analytics vendors are beginning to adopt an agentic approach to generative AI. That includes Looker, Google Cloud's flagship business intelligence platform, which unveiled a GenAI-powered data agent in preview in September.

Agentic AI is the concept of AI doing more than just assisting humans as they do their work. Instead, agentic AI tools take on tasks themselves, which in analytics and data management means doing things such as suggesting follow-up questions that dig deeper into a subject than a single query can, monitoring data for anomalies, surfacing insights and suggesting semantic modeling metrics to monitor.

In a sense, agentic AI is about AI tools being proactive rather than reactive to user requests. AI agents, meanwhile, are the conduit for agentic AI -- the interface through which users benefit from agentic AI.

Many data management and analytics vendors have introduced AI-powered assistants over the past two years that enable users to interact with their data using natural language rather than code. Recently, Tableau, a subsidiary of Salesforce, was among the first analytics vendors to add more proactive capabilities and rebrand its AI assistant as an agent.

Google's Looker is another analytics vendor making the switch to agentic AI rather than assistive AI. Looker, however, is trying to take a different approach to agentic AI from other vendors, according to Peter Bailis, vice president of engineering for Looker and AI in data analytics at Google Cloud, and formerly the founder and CEO of Sisu Data.

Looker, unlike most of its peers, features a semantic layer; among analytics platforms, MicroStrategy and Microsoft Power BI are two of the few others with semantic layers. Semantic layers enable data administrators to define metrics and standardize terms across their organization so that data is consistent and trustworthy no matter where in the organization it is needed.

Looker's AI agents are built with Looker's semantic layer, called Looker Modeling Language, or LookML, as their foundation, a move aimed at providing users with AI-powered tools they can trust.
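
To make the semantic layer idea concrete, here is a minimal Python sketch of the core pattern: a business metric is defined once by a data administrator, and every consumer -- a dashboard, an API caller or an AI agent -- resolves the term to the same certified definition. The Measure class, SEMANTIC_MODEL dictionary and resolve function below are hypothetical illustrations, not Looker's actual LookML syntax or APIs.

    from dataclasses import dataclass

    @dataclass
    class Measure:
        """One governed metric definition, loosely analogous to a LookML measure."""
        name: str         # business term users and agents refer to
        sql: str          # the single, human-certified SQL expression
        description: str  # documentation surfaced alongside any answer

    # Hypothetical semantic model: defined once by an administrator, reused everywhere.
    SEMANTIC_MODEL = {
        "total_sales": Measure(
            name="total_sales",
            sql="SUM(orders.sale_price)",
            description="Sum of sale price across completed orders.",
        ),
        "order_count": Measure(
            name="order_count",
            sql="COUNT(DISTINCT orders.order_id)",
            description="Number of distinct orders.",
        ),
    }

    def resolve(term: str) -> Measure:
        # Every consumer -- dashboard, API caller or AI agent -- gets the
        # same definition, so 'sales' means the same thing everywhere.
        return SEMANTIC_MODEL[term]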

Peter Bailis, vice president of engineering for Looker and AI in data analytics, Google Cloud

Bailis recently discussed Looker's approach to generative AI, including its initial forays into adding generative AI capabilities in the months after ChatGPT's launch, where its AI initiatives stand today and what it has planned for the future.

In addition, Bailis spoke about how Looker is attempting to stand apart from its competition as well as how he sees generative AI transforming BI to make it a tool for all workers rather than only trained specialists.

Editor's note: This Q&A has been edited for clarity and conciseness.

The term agent and the idea of agentic AI have gained popularity in recent months. For those still unfamiliar with agents and agentic AI, what is an AI agent?

Peter Bailis: That's a question we're hearing from customers. People's view of AI today is informed by chat-based experiences like Gemini and ChatGPT where you ask a question and get an answer. It's like having a copilot next to you who is a reference librarian, a supersmart buddy.

Agents come into play when you go beyond simple questions and answers, and your buddy is going ahead and completing tasks. The tasks can be multistep, require complex reasoning and may require clarification from the user, but they are really about taking actions and making complex decisions with as little human intervention as possible.

What then is the role of humans -- do they still need to be the ones checking the work of agents and approving any action before it's taken?

Bailis: It depends.

The user is in charge of the problem specification, such as 'Build me a report on sales over the past week,' and of determining factors such as what 'sales' means and whether a week is a seven-day calendar week. Then there are certain inferences an agent can make on a user's behalf, because an agent's responsibility is to automate as much of the process as possible so a user doesn't have to keep clarifying details over and over.

In general, when there's ambiguity, the best practice is to come back to the human, because the agent is serving the user and the user has context.

In Looker and competing analytics platforms such as Tableau and Power BI, what is the role of agents?

Bailis: We've spoken for years about this concept of augmented analytics [with AI] taking on repetitive, routine tasks that take a lot of an analyst's time or don't even get done at all. For example, doing follow-up analysis such as breaking something down by category or doing a period-over-period comparison.

There are follow-up questions that always arise that help users get a deeper understanding of what their data has to say. Those can be automated much more easily with GenAI, and agents are able to stitch many of these follow-up steps together to tell a complete story.

When interest in generative AI surged following OpenAI's launch of ChatGPT two years ago, how did Looker initially incorporate it into its platform?

Bailis: Looker has always had some conventional natural language question-and-answer capabilities. There is a feature called Ask Looker that does pre-AI [natural language processing], where a user types in a question and the tool does keyword matching. There was also some forecasting functionality in the platform.

When GenAI came out, in the early days of ChatGPT and even Google's Gemini, the fun thing to do was to figure out how to get the model to say the wrong thing. Even today, although models have gotten a lot better, there's huge concern around AI hallucinations, around factuality and grounding answers in a corpus of information you can trust. One of the key realizations early on from the Looker GenAI team was that factuality is the cornerstone of getting these products adopted.

What you expect from a dashboard or other analytics tool is not that it might be providing the right answer. You're looking for the real data -- concrete evidence that you can trust. That's very different from typical search scenarios or information retrieval.

How did Looker work toward engendering trust in GenAI in the months following the launch of ChatGPT and then other large language models such as Gemini?

Bailis: Very early on, Looker's GenAI strategy was to take some of its existing investments around augmented analytics and uplevel them with GenAI by taking the power of Gemini and something like Ask Looker, and making them work for a much more general set of questions than was previously possible with natural language query. With NLQ, if you didn't phrase your question exactly right, you wouldn't get an answer. GenAI allows you to ask a lot more questions, so how do you then make GenAI reliable for an analyst to trust?

Because Looker has a semantic layer -- because Looker has LookML -- you essentially have a human-certified set of semantics to give a model data that is human authored but machine readable. That's unique.

What does a semantic layer enable users to do that leads to trust?

Bailis: When you get a response from an agent, you can tie each and every one of the fields and data definitions back to a golden source of truth.

From the outset, the bedrock of the Looker GenAI strategy has been that factuality is nonnegotiable. The strategy was to leverage our semantic layer to make sure that users could trust our outputs while answering more and more of their questions over time. We started with simple questions about metrics and aggregates, and now we can do relatively sophisticated code generation on top of that same data foundation.

That's been super useful in gaining the trust of analysts who adopt these technologies and then roll them out to their user population.
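
As a rough, hypothetical illustration of the grounding Bailis describes -- not Looker's actual implementation -- an agent can be required to accept only queries whose fields exist in the semantic layer, and to hand any ambiguity back to the user. The SEMANTIC_LAYER mapping and check_grounding function below are illustrative only.

    # Hypothetical, self-contained sketch: the semantic layer is modeled as a
    # mapping from governed field names to their certified SQL definitions.
    SEMANTIC_LAYER = {
        "total_sales": "SUM(orders.sale_price)",
        "order_count": "COUNT(DISTINCT orders.order_id)",
    }

    def check_grounding(requested_fields: list[str]) -> dict:
        """Accept an AI-generated query only if every field it references is
        defined in the semantic layer; otherwise return the ambiguity to the
        user instead of guessing."""
        unknown = [f for f in requested_fields if f not in SEMANTIC_LAYER]
        if unknown:
            return {"status": "needs_clarification", "missing_definitions": unknown}
        # Lineage: each field in the answer points back to its certified definition.
        return {"status": "ok",
                "lineage": {f: SEMANTIC_LAYER[f] for f in requested_fields}}

    # 'revenue' has no governed definition, so the agent should ask the user
    # rather than invent a formula.
    print(check_grounding(["total_sales", "revenue"]))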

If Looker's initial GenAI strategy focused on trusted output, what have been some GenAI-related developments since then?

Bailis: There are two buckets.

One is that some of the most painful parts of doing analytics are the tasks that have to be done. We've built GenAI functionality through Gemini and Looker to accelerate these tasks. Now, there's everything from LookML generation so you can now generate and edit LookML with coding assistance functionality -- that's in public preview -- to formula editing and slide generation if you need to develop a Looker Studio report and share it with your team.

That's the first bucket, which is taking some of the painful parts of analysis and speeding it up.

What's the second bucket?

Bailis: The flagship feature we've developed is what we call Conversational Analytics.

Everyone likes the Gemini or ChatGPT type of interface where you ask a question and get an answer. But today, if you want to use one of those systems to get business insights, you have to upload a [table] or a spreadsheet. In Looker, you're already connected to your organization's data, so the challenge is to enable users to ask any question about their organization's data and get a trusted answer.

What we've done to make that a reality is use the latest and greatest Gemini models that we plug into the Looker ecosystem so you have a Looker data source and all the Gemini reasoning on top of it. Also, because Conversational Analytics is built on the LookML semantic model, users can trace the lineage of answers back to a definition to know it's a trusted answer. Alternatively, if the Looker conversational agent can't find something that matches, it can ask for clarification.

The pillars are the quality of Gemini and the tools we've made available to Gemini and Looker, and grounding the answers. That grounding piece has been the biggest thing in getting these products adopted. It's one thing to have a chatbot and another to have something you can show to your business stakeholders.

What are some examples of how customers are using Looker to help build generative AI tools?

Bailis: I can't talk about customers by name, but we have three key use cases we're seeing.

With the Conversational Analytics capabilities, we have folks who are taking questions from their executives, and instead of doing a deep dive, they're able to pull up the Conversational Analytics UX, answer those questions quickly and respond. They're even able to create a sidebar report and share that with their leaders. It lets them handle follow-up questions and deep dives live.

Another thing we're seeing is much more API usage, with customers wanting to build embedded applications -- for example, an in-store associate being able to access essentially a customized version of Conversational Analytics. They're able to have a chat-based experience grounded in their data that allows them to ask and answer questions about customers while the customers are there. That's really compelling because it goes beyond the traditional BI user. It's reaching frontline workers.

The third one concerns more pedestrian features like Slide Generation. Hearing the amount of time that's being saved has been very exciting.
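
The embedded, API-driven use case Bailis mentions typically runs through the Looker API. The sketch below uses the official looker_sdk Python package, but the model, view, field and filter names are hypothetical, and API credentials are assumed to be configured in a looker.ini file or LOOKERSDK_* environment variables.

    import looker_sdk
    from looker_sdk import models40

    # Assumes credentials in looker.ini or LOOKERSDK_* environment variables.
    sdk = looker_sdk.init40()

    def customer_snapshot(customer_id: str):
        """Run a governed query on behalf of an in-store associate.
        The model, view, field and filter names here are hypothetical."""
        query = models40.WriteQuery(
            model="ecommerce",   # LookML model name (hypothetical)
            view="orders",       # explore to query (hypothetical)
            fields=["orders.order_count", "orders.total_sales"],
            filters={"customers.id": customer_id},
            limit="10",
        )
        # Results come back already shaped by the semantic layer's definitions.
        return sdk.run_inline_query(result_format="json", body=query)

    print(customer_snapshot("12345"))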

You mentioned how smoothly Looker and Gemini, which are both Google Cloud Platform (GCP) entities, work together. Five years after being acquired by Google, does Looker work well with data management and AI development tools from other vendors, or has it become essentially a BI platform just for Google customers?

Bailis: BI has always been and always will be a multi-warehouse game. We aspire to be in every data warehouse and every database, running on Microsoft SQL Server, for example. We also deploy on multiple clouds -- Azure, AWS, GCP. We've made significant investments in GCP, but we're absolutely a multi-cloud, multi-warehouse platform.

For GenAI functionality, we really do rely on some of the features of the Gemini model that are unique to Gemini. For example, Gemini has the largest context window, which means I could put the entirety of Looker's documentation in the context window if I wanted to. When we want to provide an answer, we can put a ton of information about what that user cares about -- the semantic model, the behavior of the user, recommendations -- in the context window. We really leverage that unique capability and would have a lower-quality GenAI experience if we did not build on top of Gemini.
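
As a loose sketch of what exploiting a long context window can look like, the example below packs a semantic model and recent user activity into a single prompt using the google-generativeai Python package. The file names, prompt and model choice are placeholders, exact API details vary by SDK version, and this is not Looker's implementation.

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")          # placeholder key
    model = genai.GenerativeModel("gemini-1.5-pro")

    # Placeholder files standing in for the governed semantic model and the
    # user's recent activity; a long context window lets both ride along.
    semantic_model_text = open("lookml_model.txt").read()
    recent_user_activity = open("user_history.txt").read()

    prompt = "\n\n".join([
        "Answer analytics questions using only the field definitions below.",
        "SEMANTIC MODEL:\n" + semantic_model_text,
        "RECENT USER ACTIVITY:\n" + recent_user_activity,
        "QUESTION: How did sales last week compare with the prior week?",
    ])

    response = model.generate_content(prompt)
    print(response.text)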

Could customers conceivably use other language models to develop generative AI tools if they prefer them?

Bailis: There are some open source extensions that users have built on top of the Looker platform. The Looker API is alive and well, and people have built third-party GenAI-plus-Looker experiences.

We've spoken about Looker's past and present as it relates to generative AI, so what is the future?

Bailis: I came to Google about a year ago because I think Google is the only hyperscaler with an awesome data warehouse in BigQuery, an awesome BI stack with Looker and Looker Studio, and frontier models. Good quality answers come from great reasoning capabilities and context, which we've got with Gemini. When you think about next-generation GenAI capabilities, it's no surprise that everyone is thinking about how to do more and more advanced reasoning, how to go from doing two things at once to three things at once, how to have that agent go longer and longer without coming back to the user for clarification. It's resolving ambiguity, learning over time, building a history, building a memory.

Our roadmap is very much around those two pillars I shared before -- awesome quality answers with more sophisticated question-and-answer capabilities and continuing to build on that foundation of trust. The need for trust is not going to go away. As these models become more powerful, tying [responses] back to a source of truth becomes more and more important over time.

Lastly, as enterprise interest in AI continues to grow, what will BI platforms such as Looker look like in a few years?

Bailis: With GenAI, business intelligence is finally poised to live up to its name -- intelligence for the business.

For the last 20 years, it's just meant the same dashboards and reports. With GenAI, there is the capability to take really messy business contexts and get precise data-driven answers. Looker's role in this is being the trusted foundation with LookML. It enables GenAI to reason on top of organizational data.

The front end will be a completely different experience with a broader set of users.

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.
