Salesforce moves forward with secure, private generative AI
With its new AI Cloud and built-in Einstein GPT Trust Layer, the CRM and CX vendor is investing in generative AI technology and adding data security and privacy features.
NEW YORK -- Salesforce says it is serious about generative AI.
The CRM and CX giant introduced its latest generative AI services with a substantial emphasis on data safety at a one-day event in New York City on June 12 featuring a rare media appearance by Marc Benioff, the vendor's CEO, chairman and co-founder.
"Customers have the ability to use generative AI without sacrificing their data privacy and data security," Benioff said during a keynote, noting that Salesforce started using AI intensively in 2016 when it launched its first trust model for predictive AI. "This is critical for each and every one of our customers all over the world."
AI Cloud and data safety
Nearly a decade ago, Salesforce used machine intelligence to provide predictions without looking at the data, he said. Now, with the vendor's Einstein GPT Trust Layer, part of the new AI Cloud service, generative AI can handle data safely, Benioff said.
With AI Cloud, part of an arsenal of application- and industry-specific cloud platforms, Salesforce appears to be targeting not only a host of other cloud CRM and CX competitors but also cloud and AI giants Microsoft, Google and AWS.
"Salesforce must offer AI tools that serve both advanced users who require flexibility -- such as bringing their own custom-trained large language model [LLM] for customer service -- and also those who want pre-built tools they can just turn on and start infusing generative AI into their various workflows right away," said Don Fluckinger, an analyst at TechTarget's Enterprise Strategy Group. "AI Cloud is Salesforce's attempt to cater to that wide spectrum of users in various states of AI adoption and competency."
Using generative AI securely
At the Salesforce AI Day event on June 12, Julie Sweet, CEO and chair of Accenture -- the systems integrator that unveiled plans last month to help Salesforce customers deploy generative AI-backed CRM systems -- said trust is paramount for customers using generative AI, a technology still in its infancy and subject to chronic problems such as inaccuracy.
Generative AI technology based on LLMs that create text, images and audio from human prompts -- including essays, computer code and artworks -- has grown explosively since Microsoft partner OpenAI released the wildly popular ChatGPT AI chatbot last November. Google, in turn, launched Bard, its generative AI assistant, in March.
But both of those systems, like other generative AI chatbots, have been plagued by what have come to be known as "hallucinations" -- incidents in which AI chatbots invent, distort or misattribute information, sometimes producing offensive or irrational responses.
Accenture has a compliance program overseen by its board's audit committee. Enterprises should use AI safety experts and protocols to securely start using generative AI, Sweet said.
"If, when you leave this room, you are not able to pick up the phone, call someone in your company and have them tell you where the AI is being used … you do not yet have responsible AI," Sweet said. "What are the risks? How are they being mitigated? How are they monitored? And who's accountable?"
Key among the capabilities of the Einstein GPT Trust Layer is how it helps reduce the number of AI hallucinations, according to Patrick Stokes, Salesforce EVP of product and industries marketing.
Hallucinations occur when an LLM does not have adequate information to accurately answer the user's question, Stokes said. In that case, it produces an answer to the best of its ability, which may be false.
As more relevant data is supplied to the LLM, the rate of hallucinations drops, according to Stokes. Safely bringing in that data is key, he said, and the Einstein GPT Trust Layer is designed to support this.
"We're able to create this separation where your business data -- your most important asset -- is stored in a place where we can have selective control over how the data is used," Stokes said.
AI trust
Technology such as the Salesforce trust layer is one of the hot trends in generative AI now, Fluckinger noted.
"When tech vendors began releasing AI tools en masse more than a decade ago, they used words like 'efficiency' and 'workflow automation' to woo users," he said. "It's interesting that many -- including Salesforce -- now focus on trust and transparency first before they start pitching other potential benefits. Considering the headlines around AI right now, that's probably a smart move."
Salesforce Ventures, the vendor's venture capital arm, also doubled its Generative AI Fund from $250 million to $500 million to support the AI startup ecosystem and safe generative AI.
Mary Reines is a news writer covering customer experience and unified communications for TechTarget Editorial. Before TechTarget, Reines was arts editor at the Marblehead Reporter.