
Generative AI in CX promises benefits, but obstacles remain

Generative AI promises to boost efficiency in CX, but businesses face adoption and compliance hurdles.

CX technology vendors always speed ahead of their customers when it comes to the latest and greatest; there's usually a gap between what they've released and what customers have adopted. But with generative AI, the gap seems as big as the ocean -- or at least one of the Great Lakes.

Even vendor research reports, known for putting a positive shine on survey data, acknowledge it. More than a quarter of the 5,500 business respondents Zendesk surveyed internationally for its CX Trends 2025 report, released this month, have delayed AI deployments "due to a lack of budget, knowledge, or internal support."

More than half of 2,300 senior IT decision-makers have not yet aligned their generative AI (GenAI) strategies with business needs, according to an independent survey conducted in October and sponsored by consultancy NTT Data. Only 39% of respondents reported "significant" GenAI investment so far, while 60% said they've made "some investment" or are in pilot, trial and assessment phases -- some of it funded by third parties, some of it self-funded. One percent said they have no plans to invest in GenAI at all.

Yet it's coming. Companies like Salesforce and HubSpot are not only rolling out generative AI features at a furious clip, but they're also investing in AI startups to augment their platforms and bolster the health of prospective future acquisitions.

In CX, early adopters have found a home for generative AI in customer service and contact centers. Generative AI-powered bots -- called "agents" when paired with reasoning, or logic, engines that can make simple decisions autonomously -- are well suited to these applications because they aren't creating anything; they're just finding and summarizing answers more efficiently than earlier AI tools.

CX agents, for all their potential, typically require substantial implementation work beyond simply switching them on. Users must not only define the roles and processes the AI agents will fill, but also get their customer data and knowledge repositories in order -- a tall order for many IT environments.

"We're actually using generative AI with our clients in a different way -- not putting it in front of their end customers, but pointing it internally at their own knowledge, both structured and unstructured," said John Seeds, chief marketing officer at global CX consultancy TTEC Digital, which counts some large contact centers among its clients. "To start, we say 'Hey, you've got some duplication here. You've got some inconsistencies there,' and go through this massive amount of data, clean it up, right? And then clean it up on an ongoing basis with either our managed services or tools or software.

"You then have a way that you can present that data in a more effective way externally, to reduce the number of inbound calls, reduce the number of dependencies on conflicting data, and really drive self-service in a more meaningful way. [That's] always critical in the contact center," Seeds said.

Getting one's data in order is also the key to launching GenAI tools for marketing and e-commerce teams. Google, Salesforce and Sitecore, among others, have released AI that can conjure content; it might not amount to a finished campaign or website, but it gives creatives a head start with new ideas, some of them more fully formed than others.

ChatGPT launched a revolution

CX has been steeped in AI for analytics, voice recognition and natural language processing for years, especially in contact centers. Paired with rules-based automation, AI could read emails, for example, and route messages to an agent with the right expertise.
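As a rough illustration of that pre-generative-AI pattern -- not any particular vendor's product -- the sketch below stands in an intent classifier with a keyword lookup and routes each email to a queue with fixed rules. The intents, keywords and queue names are invented.

```python
# Illustration of the pre-GenAI pattern described above: a classifier assigns an
# intent to an inbound email, and fixed rules route it to the right agent queue.
EMAIL_ROUTES = {
    "billing_dispute": "billing-specialists",
    "password_reset": "tier1-support",
    "cancellation": "retention-team",
}

def classify_intent(email_body: str) -> str:
    """Stand-in for a trained NLP intent model; a keyword lookup keeps the sketch runnable."""
    text = email_body.lower()
    if "refund" in text or "charged" in text:
        return "billing_dispute"
    if "password" in text or "locked out" in text:
        return "password_reset"
    if "cancel" in text:
        return "cancellation"
    return "general"

def route(email_body: str) -> str:
    """Rules-based automation: map the predicted intent to an agent queue."""
    return EMAIL_ROUTES.get(classify_intent(email_body), "general-queue")

print(route("I was charged twice and want a refund"))  # -> billing-specialists
```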

But when GenAI came along in the form of ChatGPT in late 2022, it was different, said John Ball, ServiceNow senior vice president and general manager of customer and industry workflow products.

"I thought, 'Wow, this GenAI capability is going to be amazing, because stuff that I was dreaming up five years ago will actually work now,'" Ball said. "You can do chat reply recommendations. You can do email recommendations because you don't have to handcraft every dialog. You don't have to handcraft the intent modeling and the entity modeling. That's what the LLM does."

Clara Shih, who recently joined Meta, said that when she was CEO of Salesforce AI, she couldn't grasp generative AI when it was first unveiled to her. She saw it in one of the company's first in-person meetings, presented by Silvio Savarese, a Stanford professor and Salesforce executive vice president and chief scientist.

"I remember everyone was really nervous because this was one of the first meetings we were taking post-COVID. Everybody was wearing their mask," Shih said. "I presented my section, and then Silvio started to explain what a large language model was. And I asked myself, are you kidding me? Is this science fiction? I actually didn't believe him. But then he got up there, and he showed us. It was insane."

Jayesh Govindarajan, executive vice president of Salesforce AI, was privy to lab previews of OpenAI's ChatGPT before it was released. While the early versions "looked like a nice toy," he said, a month later the pre-release previews "just blew my mind -- I was struck by the rate of change."

For his part, career AI researcher Savarese saw GenAI coming from a long way off.

"What we were surprised to see, actually, was the huge adoption," Savarese said. "It was incredible to see that, that how all of a sudden, the entire planet was using it."

Autonomy an open question

So many questions revolve around AI autonomy: AI agents can cut costs, don't call in sick and are available to work 24-7 year-round, but can a bot be trusted to keep a customer happy? What if a bot makes a bad decision either in the company's favor or the customer's? How can guardrails rein in bots whose training takes them out of their lane? Do customers even want them?

Graphic: It's not just the cloud license costs that GenAI brings to CX.

At this point, the questions outnumber the answers. Trimedx, based in Indianapolis, manages clinical hardware for 6,000 hospitals. It plans to deploy generative AI in ServiceNow to help fill out the reports that humans now complete for its 2.5 million devices in the field, as well as the work orders that procure them. If the technology can save a few minutes on each of those 2.5 million reports, said CIO Brad Jobe, the ROI would be attractive.
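Jobe's back-of-envelope reasoning is easy to reproduce. Only the 2.5 million device count comes from Trimedx; the minutes saved per report and the loaded labor rate below are assumptions for illustration.

```python
# Back-of-envelope ROI math. Only the device count is from the article; the
# minutes saved and labor rate are assumed values for illustration.
devices = 2_500_000
minutes_saved_per_report = 3        # assumption
loaded_labor_rate_per_hour = 60.0   # assumption, USD

hours_saved = devices * minutes_saved_per_report / 60
labor_value = hours_saved * loaded_labor_rate_per_hour
print(f"{hours_saved:,.0f} hours saved, roughly ${labor_value:,.0f} in labor value")
# -> 125,000 hours saved, roughly $7,500,000 in labor value
```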

Jobe added that he sees much potential in this application of GenAI, which will be rolled out in the next two quarters. Future plans include piping educational content on checklists, parts lists and troubleshooting guides through generative AI. The purpose is to accelerate the learning curve for young clinical engineers who are replacing the older, experienced ones in the field.

Agentic autonomy in the clinical environment is much further off, Jobe said.

"I've talked to a lot of folks who believe that will happen," Jobe said. "This is the new shiny technology that's out there, and are there opportunities? Yes, I think so. Do we get to a point where we're taking the decisions out of the clinician's hands? That's a little bit of a stretch. The thing right now is, can we provide information to help them?"

Another large ServiceNow customer, manufacturing and industrial engineering juggernaut Siemens, developed its own "bionic agent" on Microsoft Azure for internal use. The company is moving toward deploying it to external customers as well. The agent can manage tasks in supply chain procurement, HR employee benefits management and payroll.

Deployed as a shared service, the Siemens agent launched before GenAI became available. GenAI training has since sped up development cycles, allowing the company to customize the agent for different roles and use cases, said Matthias Egelhaaf, head of digital solutions at Siemens AG.

Not only does generative AI increase productivity in documentation processes and solve problems for Siemens, but it also does so more accurately than humans performing the same tasks, he said. Egelhaaf is ready for agent autonomy whenever and wherever it makes sense.

"It brings productivity like hell," he said.

Regulations and copyright concerns

Upcoming regulations will add to the questions surrounding generative AI agents in customer service applications. CX leaders will likely have to account for compliance in the business processes they design and the technology to support those processes.

The Biden administration has coordinated a multi-agency effort to improve CX in customer service, marketing and advertising, and this week it outlawed "junk fees" for hotels, vacation rental platforms and concert ticket brokers.

These CX rules typically have bipartisan support in Congress, although incoming Federal Trade Commission head Andrew Ferguson was among the minority of commissioners who voted against the Click to Cancel rule, which would make ending subscriptions such as gym memberships or streaming services as easy as signing up. Autonomous agents could make clicking to cancel -- and executing the cancellation -- very straightforward.

Graphic: The Click to Cancel rule in detail.

States have their own versions of this rule, and those remain in play regardless of how the federal government enforces the FTC's Click to Cancel rule, said attorney Alexis Amezcua, a litigation partner at law firm Morrison & Foerster. Compliance will likely be a challenge for large and small companies alike.

"I see challenges both ways," Amezcua said. "Whether you are a massive, established company with many levels of your tech stack required to fully comply -- or if you are a startup, a mom and pop, three to five employees. You are an entrepreneur. You're not a lawyer. You don't know what it means and need a lawyer to help you comply."

In the private sector, companies are trying to mitigate the risk of accidentally appropriating copyrighted material with their AI use. Adobe spearheaded the Content Authenticity Initiative (CAI), which includes a metadata standard, open source tools and other technology to prove an image's provenance and editing history. Launched in 2019, the initiative set out to develop standards for embedding a trail of data showing how images were digitally edited with tools such as Photoshop.

When generative AI emerged in 2022, the initiative's 4,000 media, hardware and software partners also took on building a way to document when and how AI generation is used in digital content.
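Conceptually, the provenance trail the initiative describes amounts to a structured history attached to each asset. The sketch below is illustrative only -- it is not the C2PA schema or Adobe's implementation -- but it shows the kind of record that lets a viewer tell captured, edited and AI-generated content apart.

```python
# Illustrative only: a simplified stand-in for the kind of provenance record the
# CAI metadata standard carries. Field names are invented; this is not the C2PA schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EditAction:
    tool: str           # e.g. "Photoshop"
    action: str         # e.g. "crop", "color_adjust", "generative_fill"
    ai_generated: bool  # whether this step used generative AI

@dataclass
class ProvenanceManifest:
    asset_id: str
    captured_with: str                          # recording device, or "fully_generated"
    history: List[EditAction] = field(default_factory=list)

manifest = ProvenanceManifest(
    asset_id="img-0042",
    captured_with="camera",
    history=[
        EditAction(tool="Photoshop", action="crop", ai_generated=False),
        EditAction(tool="Photoshop", action="generative_fill", ai_generated=True),
    ],
)
print(any(step.ai_generated for step in manifest.history))  # True: partially AI-generated
```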

The CAI isn't about catching bad guys, said Andy Parsons, senior director of the Content Authenticity Initiative at Adobe. It's more about giving the good guys transparency tools to show where their content came from.

"This is about understanding. [It's about] fulfilling a basic human right to understand what content is," Parsons said. "Whether we have a photograph or a video shot on a recording device, or something that's fully generated or partially generated. Ultimately, when we talk about authenticity, this is about proving you know what's real."

Don Fluckinger is a senior news writer for TechTarget Editorial. He covers customer experience, digital experience management and end-user computing. Got a tip? Email him.
