JFrog buy bolsters MLOps combo with DevSecOps
JFrog plans to meld AI/ML development with established DevSecOps pipelines through the acquisition of Qwak in a bid to help more enterprise AI apps reach production.
JFrog is digging deeper into the niche it began to carve out in MLOps last year, acquiring its MLOps partner in a bid to solidify the bond between AI model development and DevSecOps pipelines.
JFrog started down this path in September 2023 with ML Model Management, a feature for its Software Supply Chain Platform that scans AI/ML models for malicious code, treating them similarly to other types of software packages. It also added support for hosting open source Hugging Face generative AI (GenAI) models so that they can be scanned and managed alongside other software artifacts in DevSecOps pipelines.
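In practice, that kind of proxying typically amounts to pointing the Hugging Face client at an internal repository endpoint instead of huggingface.co. The following is only a minimal sketch of the idea in Python, with a hypothetical repository URL standing in for a real Artifactory instance:

```python
# Minimal sketch: pull a Hugging Face model through a proxying artifact
# repository so the files are cached server-side and can be scanned like
# any other package. The repository URL is hypothetical.
import os

# huggingface_hub reads HF_ENDPOINT when it is imported, so set it first.
os.environ["HF_ENDPOINT"] = (
    "https://artifacts.example.com/artifactory/api/huggingfaceml/hf-remote"
)

from huggingface_hub import snapshot_download

# The download is routed through the proxy, which retains a copy that the
# pipeline can scan and manage alongside other software artifacts.
local_dir = snapshot_download(repo_id="bert-base-uncased")
print(f"Model files cached at: {local_dir}")
```

The point is that the model becomes just another artifact the existing pipeline already knows how to cache, scan and filter.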
In February, JFrog partnered with Qwak, an Israel-based machine learning operations (MLOps) startup, to integrate JFrog's Artifactory and Xray software artifact management products with Qwak's platform for building, training and deploying AI models and apps. Now, JFrog will acquire Qwak for an undisclosed amount, with plans to integrate its AI model training and serving workflows throughout JFrog's DevSecOps product line.
"We basically saw customers embedding more and more machine learning into typical software supply chains," said Yoav Landman, CTO and co-founder of JFrog. "It's actually gotten a very substantial push with GenAI."
These enterprises are seeing malicious code crop up in open source AI models at rates similar to what they found in other open source package repositories, according to Landman. That has them looking for similar ways to scan and filter AI models.
Buying Qwak will take JFrog several steps further into blending the management of existing software artifacts with newer AI models and applications, he said.
"What [Qwak is] delivering is the ability to get a model to runtime in a very, very simple way," he said. "The ability to have a model that is integrated with the rest of the applications that surround it and put it in production in a very simple way is actually very challenging. Many projects fail to reach production because the pipelines are siloed -- [AI] still needs to be part of the of the DevOps loop."
GenAI failures prompt fresh vendor pitches
Industry research indicates a high failure rate for enterprise AI projects, particularly in generative AI, where models are still prone to producing inaccurate or otherwise poor-quality results, and the costs of training large language models can be exorbitant. A 2023 Gartner survey report found that 52% of enterprise AI projects failed to make it into production. More recently, a SolarWinds survey of nearly 700 IT professionals found that only 38% were very trusting of the quality of data used to train AI models; respondents also identified security as the single most significant barrier to AI integration.
JFrog also isn't alone in trying to bring MLOps into the DevSecOps fold. Microsoft introduced several new AI tie-ins and guidance documents for app developers at its Build conference in May. In April, AWS and Google took similar approaches with the Amazon Q and Vertex AI model development services, respectively. Other vendors, from Docker to GitLab and Red Hat, are developing integrations between AI and DevSecOps tools and infrastructure.
Industry analysts expect further consolidation between MLOps and DevSecOps in the form of M&A activity, as well.
"There are quite a few companies in [MLOps] such as Datarobot, Domino, SAS, etc. I track about 40 of them," said Andy Thurai, an analyst at Constellation Research. "But they don't offer any software pipeline management process, so they had to integrate with one the CI/CD companies. … Going forward, I would expect most of the MLOps companies to be taken out, either by CI/CD platforms or by AI platforms."
While not a top priority, some enterprises intend to consolidate MLOps and DevSecOps tools this year, according to an unpublished 2024 "DevOps Practices, Perceptions and Tools" survey by IDC. In that survey, "Bringing MLOps and DevOps together" outranked 11 out of a total of 20 potential priorities, with 10% of 311 respondents choosing that as their top or No. 2 priority.
"This may seem low, but there was a lot of spread on this question. … The top two [options] only garnered 19.9% (automation) and 16.1% (continuous deployment)," said Katie Norton, an IDC analyst who conducted the survey. "This data certainly merits the actions JFrog and others in the DevOps market have taken to integrate MLOps capabilities into traditional DevOps platforms."
Early adopters vs. the AI/ML mainstream
Getting AI/ML apps to production does require integration with DevSecOps pipelines eventually. But one early adopter of generative AI app development said MLOps and DevSecOps aren't completely integrated and likely won't be for security reasons.
"MLOps is used within both research and production projects to manage versions, audit logs, data lineage for GDPR, artifacts and model performance," said Ian Beaver, chief scientist at Verint Systems, a contact center-as-a-service provider in Melville, N.Y. "However, we apply DevSecOps tools/practices only once a research project has been promoted to a production project."
In the research phase, AI engineers work in an isolated lab environment with no Internet access, where MLOps frameworks help manage experimental models and track the results of experiments, Beaver said.
"In this stage … we have not incorporated DevSecOps, as many of these prototype models may never see the light of day and we want to iterate quickly," he said. "For example, I don't want a researcher to spend time patching Tensorflow vulnerabilities on an experimental model when the final version may end up being built on an AWS Bedrock foundation model and we never end up deploying Tensorflow to production."
Verint already uses multiple self-hosted DevSecOps tools, which Beaver declined to name, because of the company's unique compliance requirements and its need to deploy AI apps globally in more than 80 languages. But Beaver said tools from JFrog and Qwak could appeal to companies with a narrower scope for AI projects.
"I could see if there was a project where you knew from the start what model and libraries were going to be deployed, you could incorporate DevSecOps tooling from the beginning," he said. "For example, if you just had a company policy that all GenAI solutions would be built on GPT-4o, it would simplify the research phase and allow for earlier security incorporation."
JFrog and Qwak could also appeal to companies that don't want to centralize AI apps on one of the major cloud vendors, Beaver said.
Industry experts said those descriptions will apply to a significant number of mainstream enterprises still struggling to put AI/ML apps into production.
"The separation between DevSecOps and MLOps is an artificial one that needs to be resolved ASAP," said Torsten Volk, an analyst at TechTarget's Enterprise Strategy Group. "Enterprises and software vendors will ultimately figure out how to optimally sync both types of processes. … This may take a while, but the payout, especially for large organizations, will be large enough to be worth it."
JFrog's artifact repositories could make for a compelling center of control, even if MLOps and DevSecOps remain at least partially separate processes for IT teams, said Jim Mercer, an analyst at IDC.
"In the rush to GenAI, people generally stayed in their silos. And this is part of the reason why initiatives stall because there is insufficient coordination and different understandings of truth," Mercer said. "While there may be parallel paths, the repository needs to be that common source of truth, as developers are increasingly getting involved with model development and integrating GenAI capabilities into applications."
Beth Pariseau, senior news writer for TechTarget Editorial, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.