Snowflake partners with Anthropic to improve AI development

The alliance aims to make it easier and faster for the data cloud vendor's customers to use the Claude line of large language models when developing advanced applications.

Eight months after a CEO change marked a newfound commitment to fostering AI development, Snowflake on Wednesday continued its aggressive push to enable customers to develop trusted AI tools by unveiling a multiyear partnership with Anthropic.

Based in San Francisco, Anthropic is an AI startup founded in 2021 that develops the Claude line of large language models (LLMs).

Snowflake, meanwhile, is a data platform vendor based in Bozeman, Mont., but with no central headquarters. It specialized in data management before also making AI development a focal point over the past couple of years.

However, despite unveiling generative AI capabilities of its own and features aimed at enabling customers to build their own AI tools as far back as June 2023, Snowflake did not embrace AI development with the same vigor as rival Databricks and tech giants such as Google and Microsoft.

That has changed since late February, when CEO Frank Slootman, who guided Snowflake through a record-breaking initial public offering in September 2020, stepped down and Sridhar Ramaswamy was named his successor.

Just days after Ramaswamy became CEO, Snowflake unveiled a partnership with Mistral AI to provide access to Mistral's line of LLMs. Since then, the vendor has unveiled tools to enable customers to develop chatbots and containerization capabilities for deploying AI models and applications, among other features to aid AI development.

Now, its partnership with Anthropic further advances its AI development capabilities and brings them more in line with those of its competition, according to Donald Farmer, founder and principal of TreeHive Strategy.

"Snowflake appears to be catching up," he said. "They can build on their solid data warehousing foundation."

However, there's more to be done before Snowflake has fully caught up, Farmer continued.

Databricks has more mature AI capabilities and has made features generally available faster than Snowflake, he said. AWS offers a wider range of models through its Bedrock platform than Snowflake does through Cortex AI, Google Cloud has more extensive AI development tools via Vertex AI, and Microsoft's partnership with OpenAI might be more attractive to customers than Snowflake's with Anthropic, Farmer added.

Likewise, Andy Thurai, an analyst at Constellation Research, noted that Snowflake lost ground to competitors when enterprise interest in generative AI first surged. Recent product developments and integrations have helped it close the gap, but a disparity remains.

"Snowflake has been losing its mojo to Databricks and other [competitors] for the past couple of years on the AI front, [and] they still have to catch up with the dominant player in this area -- Databricks," Thurai said. "This is a good step, but there's a long way to go."

Snowflake's new partnership with Anthropic was unveiled just a day after Microsoft, during its annual Ignite user conference in Chicago, introduced a spate of new capabilities aimed at helping customers develop AI tools.

The partnership

By enabling nontechnical employees to use natural language to query and analyze data, generative AI has the potential to expand the use of analytics beyond a small percentage of technical experts, thus making businesses smarter. In addition, generative AI has the potential to make businesses more efficient by enabling organizations to automate processes that previously needed to be performed by humans.

As a result, enterprise interest in developing generative AI tools that could transform operations has surged in the two years since OpenAI's launch of ChatGPT marked a significant improvement in the technology and began the modern generative AI era.

To develop generative AI tools that understand their operations, enterprises must combine two basic ingredients: proprietary data and generative AI technology from foundation models.

Because they already manage the proprietary data that models need, data management and analytics vendors have responded en masse. Most have developed AI tools, such as chatbots and agents, that enable users to interact with their platforms using natural language. Many have gone further and built environments in which their customers can develop their own generative AI tools.

They've added capabilities such as vector search and retrieval-augmented generation (RAG) so that developers can ground models in trusted proprietary data rather than retrain them. And to provide the actual generative AI -- the second ingredient -- they've added integrations with LLMs such as OpenAI's GPT models, Google's Gemini and Meta's Llama so that customers can combine their data with the LLM of their choice.
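To make that pattern concrete, the sketch below walks through the retrieval-augmented flow in plain Python. It is illustrative only: the sample documents, the bag-of-words "embedding," the overlap-based search and the stubbed complete() call are stand-ins for the embedding models, vector indexes and LLM endpoints a real platform would supply.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Everything here is a
# stand-in: a real system would use an embedding model, a vector index and a
# hosted LLM rather than these toy substitutes.
import re

# Hypothetical proprietary documents that would normally live in a governed data store.
DOCUMENTS = [
    "Q3 revenue grew 12% year over year, driven by the retail segment.",
    "The new returns policy allows exchanges within 60 days of purchase.",
    "Support tickets about login failures dropped 40% after the June release.",
]

def embed(text: str) -> set:
    # Stand-in for an embedding model: a simple bag of lowercase word tokens.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def vector_search(query_terms: set, top_k: int = 2) -> list:
    # Stand-in for a vector index: rank documents by word overlap with the query.
    ranked = sorted(DOCUMENTS, key=lambda d: len(embed(d) & query_terms), reverse=True)
    return ranked[:top_k]

def complete(model: str, prompt: str) -> str:
    # Stand-in for an LLM endpoint; a real call would go to the chosen model.
    return f"[{model} would answer here, grounded only in the supplied context]"

def answer_with_rag(question: str) -> str:
    # Retrieve trusted context, then augment the prompt instead of retraining a model.
    context = "\n".join(vector_search(embed(question)))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return complete("an-llm-of-choice", prompt)

print(answer_with_rag("How long do customers have to exchange a purchase?"))
```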

Snowflake's partnership with Anthropic addresses that second part of the generative AI development recipe, providing access to LLMs.

Under the partnership, Anthropic's Claude 3.5 models are available in Cortex AI, where they can be used without developers having to move data out of Snowflake's secure environment and risk its accidental exposure. With Anthropic's models in Cortex AI, users have more LLMs to choose from as they build their enterprise's generative AI tools; Claude joins models from Google, Meta and Mistral, among others.
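As a rough illustration of what keeping the work inside that secure environment can look like, here is a hedged Snowpark for Python sketch that calls the SQL-level SNOWFLAKE.CORTEX.COMPLETE function on rows that never leave Snowflake. The connection values, the support_tickets table and its columns are placeholders, and the 'claude-3-5-sonnet' model identifier is an assumption about how the Claude models are named in Cortex AI.

```python
# Sketch: running a Claude model over data that stays inside Snowflake, using
# Snowpark for Python and the SNOWFLAKE.CORTEX.COMPLETE SQL function.
# The connection values, the support_tickets table and its columns are
# placeholders; "claude-3-5-sonnet" is an assumed Cortex AI model identifier.
from snowflake.snowpark import Session

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Summarize each ticket with Claude; the ticket text is processed in Snowflake
# rather than being exported to an external API.
rows = session.sql(
    """
    SELECT
        ticket_id,
        SNOWFLAKE.CORTEX.COMPLETE(
            'claude-3-5-sonnet',
            CONCAT('Summarize this support ticket in one sentence: ', ticket_text)
        ) AS summary
    FROM support_tickets
    LIMIT 10
    """
).collect()

for row in rows:
    print(row["TICKET_ID"], row["SUMMARY"])

session.close()
```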

The partnership, however, goes deeper than simply providing Snowflake customers with more choice as they develop generative AI models and applications.

Snowflake will now use Anthropic's Claude as one of the models powering the semiautonomous and autonomous generative AI agents it provides to customers. The prebuilt agents, which can be customized with an enterprise's proprietary data, will be optimized for Claude from the outset.

With the added security provided by the partnership and tighter alignment than an integration would represent, Snowflake's agreement with Anthropic is valuable, according to Farmer.

"The partnership is highly significant for Snowflake customers," he said. "It brings Claude directly into Snowflake's AI Data Cloud while maintaining their security-first approach. This is a good approach, enabling processing of a customer's entire knowledge base within a secure environment."

Mike Leone, an analyst at TechTarget's Enterprise Strategy Group, likewise said the partnership is important for Snowflake customers.

Anthropic's Claude models are among the highest-performing LLMs, he noted. Optimizing prebuilt agents for Claude and providing access to the models through Cortex AI are therefore important.

"This is a huge win for Snowflake customers as they look to leverage the power of generative AI underpinned by trusted LLMs like Anthropic's latest, market-leading Claude models," Leone said.

In addition, the nature of the relationship between Snowflake and Anthropic carries weight, he continued. The partnership, unlike a common connector, enables Snowflake customers to combine their data with generative AI without having to move it away from the security and governance measures of the Snowflake Data Cloud.

"Knowing that governance, security, privacy and compliance are so critical -- and often seen as challenges to quickly see value from generative AI -- Snowflake is enabling customers to extend their trust and safety frameworks to encompass Claude," Leone said.

As for choosing to optimize its agents for Claude rather than another LLM, Snowflake is aligning itself with some of the best LLMs, according to Thurai.

Like Leone, he noted that Claude performs well in benchmark testing, though he added that LLM benchmarks should be viewed with skepticism because models from the major generative AI vendors continually leapfrog one another in size, capabilities and other attributes.

"Based on my conversations with enterprises, Anthropic seems to be the best-performing model among the LLMs to date," Thurai said.

Anthropic was founded by seven former OpenAI employees, including some of the leaders of OpenAI's research, safety and policy initiatives, who left because they didn't like the way OpenAI was building its models, he noted.

"They promised to build something better," Thurai said. "And they've been able to build a similar or better model family. ... The results seem to be equal or better than other models and at a much cheaper cost. As of now, their Claude 3.5 Sonnet is one of the top-performing models in the market."

Snowflake's partnership with Anthropic is its seventh with an LLM provider, following agreements that included Mistral in March and Meta, developer of the Llama line of LLMs, in July.

However, Snowflake's partnership with Anthropic is different from the others, according to Harshal Pimpalkhute, Snowflake's principal product manager. Unlike the vendor's partnerships with other LLM providers, which simply gave customers access to an array of models through integrations, the Anthropic agreement includes Snowflake's use of Claude in its own agentic tools.

Pimpalkhute added that as Snowflake partners with LLM providers, it follows strict guidelines.

"Our goal is to make generative AI easy, efficient and trusted for our users," he said. "This means prioritizing models that offer high performance and efficiency, while optimizing costs for our customers. As the AI landscape rapidly advances, we are committed to bringing the best ... models to Cortex AI."

Next steps

Following the formation of its partnership with Anthropic, Snowflake would be wise to continue adding partnerships with AI developers to provide its customers with more choices as they develop AI tools, according to Leone. Some LLMs are better at certain tasks than others, so depending on the application, one model might be more appropriate than another.

"Organizations want flexibility in what models they can choose to support their use cases, so expanding their model provider partnerships continues to be a big deal for Snowflake," Leone said.

Farmer likewise suggested that Snowflake should enable more model choice as it builds its AI development environment.

"[Snowflake should] continue adding diverse model options beyond current partnerships," he said. "The range of models is becoming as important as the depth of integration with any one model."

In addition, Farmer stressed that Snowflake needs to make features generally available more quickly. Typically, the vendor takes at least six months between introducing capabilities and making them generally available.

For example, Cortex AI was first unveiled in November 2023, and access to LLMs on Cortex was made generally available in May 2024. Meanwhile, other capabilities such as Snowpark Container Services, which enables secure deployment of models and applications, took 14 months to go from preview to general availability.

"Move faster," Farmer said, regarding what Snowflake should do next to better enable AI development. "[It needs to] move more AI capabilities from preview to GA to match competitors."

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.
