
Nexla integration with Nvidia NIM boosts AI development

The complementary partnership combines the data integration vendor's discovery and retrieval capabilities with the tech giant's power to scale and secure AI development.

Nexla on Tuesday unveiled an integration with Nvidia NIM aimed at enabling customers to more easily develop and deploy AI models and applications.

Nvidia NIM is a microservices-based AI development suite from Nvidia that includes integrations with generative AI models from Mistral and Meta, among others. It also includes blueprints for building AI-powered agents and other AI applications.

Nexla, meanwhile, is a data integration vendor whose platform enables customers to combine data from disparate sources, including both structured and unstructured data. Like many data management vendors, Nexla has recently added capabilities aimed at enabling AI development. Some were developed in-house, while others are the result of partnerships with vendors ranging from tech giants AWS and Google to vector database specialist Pinecone.

Now, Nexla is adding a partnership with Nvidia to further improve its AI development suite while providing Nvidia users access to its data preparation and discovery capabilities.

Through the integration -- which comes at an undisclosed extra cost for both Nexla and Nvidia customers -- Nexla users will have access to Nvidia capabilities that enable them to scale and secure their AI development beyond what they could do with Nexla alone. For example, they will have access to GPU containers and fast vector embedding generation. Nvidia users, meanwhile, will have access to data discovery capabilities from Nexla that are critical to finding the relevant, quality data required to develop successful AI tools.

Given that Nexla's platform and Nvidia NIM complement one another, the integration is significant, according to Kevin Petrie, an analyst at BARC U.S.

"Nexla's partnership with Nvidia opens up critical opportunities for adopters of generative AI," he said. "It helps companies reduce the complexity of transforming their unstructured data into generative AI-ready inputs."

Founded in 2016, Nexla is based in San Mateo, Calif. The company's primary competitors include fellow data integration specialists such as Fivetran and Informatica, as well as tech giants that provide integration capabilities, such as AWS.

The integration

Enterprise interest and investment in developing generative AI-powered applications have surged in the two years since OpenAI's launch of ChatGPT. Getting generative AI-powered tools from the idea stage through development and into production, however, is not easy.

It takes careful data curation and preparation, along with the integration of numerous capabilities. Even with the right tools, making models and applications accurate enough to be trusted is difficult.


The integration between Nexla and Nvidia NIM is designed to make AI development easier, according to Saket Saurabh, Nexla's co-founder and CEO.

"The main problem we're trying to help solve is how to help enterprises get their ideas around generative AI into production," he said. "That means not just a demo but something that can run at scale and is reliable."

Nexla already provides many of the tools needed for AI development, accessible through a natural language interface, but it lacks the compute power and security Nvidia can provide. Nvidia NIM, meanwhile, includes a broad set of AI development features, but its data discovery and retrieval capabilities fall short of those of a data integration specialist.

"It's a good mutual partnership where Nexla makes it possible to do things faster and easier, and if you want advanced technology, Nvidia brings that," Saurabh said.

Specific benefits of the integration include the following:

  • Accelerated document parsing and generation of vector embeddings powered by Nvidia.
  • GPU-powered containers provided by Nvidia that make deployments more secure.
  • Flexible AI workflow deployment options provided by Nvidia, including public cloud, private cloud and on-premises.
  • Agentic AI-powered retrieval of data sets from Nexla that uses metadata to identify relevant data for informing an AI application.
  • Retrieval-augmented generation (RAG) pipelines from Nexla that can handle large volumes of structured and unstructured data and include assessment modules that test data for relevancy.

RAG pipelines, which are the key link between an enterprise's proprietary data and LLMs that provide generative AI technology, are difficult to develop, according to Petrie. But because they effectively retrieve relevant data that can reduce AI hallucinations, they are the preferred connective tissue for many enterprises.
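To make the pattern concrete, the following is a minimal, generic sketch of RAG-style retrieval in Python. It is not Nexla's or Nvidia NIM's actual API; the embed_text and call_llm functions are hypothetical placeholders standing in for an embedding model and an LLM endpoint.

```python
# Minimal, generic sketch of the RAG pattern: embed documents, retrieve the
# chunks most relevant to a question, and pass them to an LLM as grounding
# context. embed_text() and call_llm() are hypothetical placeholders, not
# Nexla or Nvidia NIM APIs; a real pipeline would swap in an embedding
# service and an LLM endpoint.
import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Placeholder embedding. A real pipeline would call an embedding model
    (for example, a GPU-accelerated service). This toy version hashes
    characters into a fixed-size vector so the example runs on its own."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the question and keep the top_k."""
    q_vec = embed_text(question)
    scored = [(float(np.dot(q_vec, embed_text(doc))), doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; a real pipeline would send the prompt to a
    hosted or self-managed model endpoint."""
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    docs = [
        "Q3 revenue grew 12% year over year, driven by enterprise subscriptions.",
        "The onboarding guide explains how to connect a new data source.",
        "Support tickets about login errors spiked after the last release.",
    ]
    question = "How did revenue change in Q3?"
    context = retrieve(question, docs)
    # Grounding the prompt in retrieved context is what reduces hallucinations.
    prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    print(call_llm(prompt))
```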

As a result, the combination of AI-powered retrieval capabilities from Nexla with features from Nvidia NIM, such as accelerated vector-embedding generation and simplified deployment options, is valuable.

"Nexla handles structured tables and unstructured documents, both of which provide critical inputs for RAG and generative AI," Petrie said. "This integration helps companies reduce the training, integration complexity and number of tools required as they tackle RAG."

Future plans

With the integration now available, Nexla's roadmap is focused on developing and improving its generative AI capabilities, including its chat-based interface, to make it easier for customers to integrate data and develop AI applications, according to Saurabh. In addition, the vendor plans to develop agentic AI capabilities that act autonomously to relieve humans of certain mundane tasks.

"Ultimately, our focus is helping people bring [AI development] to production, which means having governance, monitoring, security and an easier interface so more people can approach the technology," Saurabh said.

Petrie, meanwhile, suggested Nexla expand its model operations capabilities.

AI development, including RAG, depends on the combination of data operations, development operations and model operations, he noted. Nexla's historical focus has been on data operations, while the partnership with Nvidia strengthens its development and model operations capabilities. Still, there is more that could be done.

"I'll be interested to see how Nexla can contribute more to the ModelOps lifecycle in the future," Petrie said. "For example, it could extend its partnership with Databricks to assist RAG workflows."

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.
