Software supply chain security AI agents take action
Three software supply chain security vendors join the AI agent trend that is sweeping tech, as AI-generated code threatens to overwhelm human security pros.
Software supply chain security tools from multiple vendors moved from software vulnerability detection to proactive vulnerability fixes with new AI agents released this week.
AI agents are autonomous software entities backed by large language models (LLMs) that can act on natural language prompts or event triggers within an environment, such as software pull requests. As LLM-generated code from AI assistants and agents such as GitHub Copilot floods enterprise software development pipelines, analysts say it represents a fresh threat to enterprise software supply chain security by its sheer volume.
"When you have developers using AI, there will be a scale issue where security teams just can't keep up," said Melinda Marks, an analyst at Enterprise Strategy Group, now part of Omdia. "Every AppSec [application security] vendor is looking at AI from the standpoint of, 'How do we support developers using AI?' and then, 'How do we apply AI to help the security teams?' We have to have both."
Endor Labs AI agents perform code reviews
Endor Labs began in the software supply chain security market by focusing on detecting, prioritizing and remediating open source software vulnerabilities. However, its CEO and co-founder, Varun Badhwar, said AI-generated code is now poised to overtake open source as the primary ingredient in enterprise software.
"AI creates code based on previous software, but the average customer ends up with three to five times more code created, swarming developers with even more problems," Badhwar said. "And most AI-generated code has vulnerabilities."
Endor plans to ship its first set of AI agents next month under a new feature called AI Security Code Review. The feature comprises three agents trained using Endor's static call graph to act as a developer, security architect and app security engineer. These agents will automatically review every code pull request in systems such as GitHub Copilot, Visual Studio Code and Cursor via a Model Context Protocol (MCP) server.
According to Badhwar, Endor's agents look for architectural flaws that attackers could exploit, taking a wider view than built-in code-level security tools such as GitHub Copilot Autofix. Such flaws could include adding AI systems that are vulnerable to prompt injection, introducing new public API endpoints, and changing authentication, authorization, cryptography or sensitive data handling mechanisms. The agents then surface their findings and prioritize them according to their reachability and impact, with recommended fixes.
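The kind of architectural review described here can be pictured as pattern-matching over the added lines of a pull request diff for risk signals such as new public routes or touched authentication code. The sketch below is purely illustrative; the patterns and function names are assumptions, not Endor Labs' actual implementation, which the company says is driven by its static call graph and LLM-backed agents.

```python
import re

# Toy diff reviewer: flag added lines in a unified diff that touch
# risk-sensitive areas. Patterns are illustrative assumptions only.
RISK_PATTERNS = {
    "new public endpoint": re.compile(r"^\+.*@app\.route\("),
    "auth change": re.compile(r"^\+.*\b(authenticate|authorize|verify_token)\b"),
    "crypto change": re.compile(r"^\+.*\b(hashlib|hmac|cryptography)\b"),
}

def review_diff(diff: str) -> list[str]:
    """Return labeled findings for added ('+') lines in a unified diff."""
    findings = []
    for line in diff.splitlines():
        for label, pattern in RISK_PATTERNS.items():
            if pattern.match(line):
                findings.append(f"{label}: {line[1:].strip()}")
    return findings

diff = """\
+@app.route("/export", methods=["POST"])
+def export_data():
+    return authenticate(request)
"""
for finding in review_diff(diff):
    print(finding)
```

A real agent would reason over call graphs and data flow rather than regexes, but the triage output — findings tied to specific changed lines — is the same shape.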
Existing Endor customers said the AI agents show promise that could help security teams move faster and disrupt developers less.
"Gone are the days where I would say [to an AppSec tool], 'Show me all the red blinking lights,' and it's all red," said Aman Sirohi, senior vice president of platform infrastructure and chief security officer at sales AI data platform company People.ai, which started using Endor Labs about six months ago and has beta tested the new AI agents.
"Is the vulnerability reachable in my environment?" Sirohi said. "And don't give me a tool that I cannot [use to address] the vulnerability … one of the great things that Endor has done is use LLMs to explain the vulnerability in plain English."
AI Security Code Review helps application security pros clearly explain vulnerabilities and how to fix them to their developer counterparts without going to Google for research, Sirohi said. Reading the natural language vulnerability summaries has given him a better perspective on patterns of vulnerabilities that should be proactively addressed across teams, he said.
Another Endor Labs user said he's keen to try the new AI Security Code Review.
"It's imperative to use tools that are closest to developers when they write code," said Pathik Patel, head of cloud security at data management vendor Informatica. "This tooling will eliminate many vulnerabilities at the source itself and dig into architectural problems. This is good functionality that will grow and be useful."
Lineaje AI agents autofix code, containers
Lineaje started in software supply chain vulnerability and dependency analysis, supporting automation bots and using AI to prioritize and recommend vulnerability remediations.
Lineaje rolled out AI agents this week that autonomously find and fix software supply chain security risks in source code and containers. According to a company press release, the AI agents can speed up tasks such as comparing code versions, generating reports, analyzing and searching code repositories, and performing compatibility analysis at high scale.
Lineaje also shipped golden open source packages and container images this week, along with updates to its software composition analysis (SCA) tool that don't require AI agents. According to Marks, this is potentially a wise move, as trust in AI remains limited among enterprises.
"There's going to be a comfort-level adjustment, because there are AppSec teams who still need to see everything and do everything [themselves]," she said. "This has been a challenge from the beginning, with cloud-native development and traditional security teams."
Cycode AI agents analyze risks
Another non-agentic software supply chain security update from AppSec platform vendor Cycode this week added runtime memory protection for CI/CD pipelines via its Cimon project. Cimon already prevented malicious code from running in software development systems using eBPF-based kernel monitoring. This week's new memory protection module prevents malicious processes from harvesting secrets from memory during CI builds, as happened during a GitHub Actions supply chain attack in March.
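In the March incident, a compromised GitHub Action dumped runner memory and searched it for credential-shaped strings. The toy scanner below shows why secrets sitting in build-process memory are harvestable; the token patterns are illustrative assumptions and have nothing to do with Cycode's actual eBPF-based detection logic.

```python
import re

# Illustrative only: scan a captured memory snapshot (a bytes blob) for
# secret-shaped strings, mimicking what a memory-harvesting attack does.
SECRET_PATTERNS = [
    re.compile(rb"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token shape
    re.compile(rb"AKIA[0-9A-Z]{16}"),     # AWS access key ID shape
]

def scan_memory(blob: bytes) -> list[bytes]:
    """Return secret-shaped byte strings found in a memory snapshot."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(blob))
    return hits

snapshot = b"GITHUB_TOKEN=ghp_" + b"a" * 36 + b" ... AKIA" + b"B" * 16
print(scan_memory(snapshot))
```

Runtime memory protection inverts this: instead of scanning for secrets after the fact, it blocks untrusted processes from reading build-process memory at the kernel level in the first place.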
Cycode also rolled out a set of "AI teammates" including a change impact analysis agent that proactively analyzes code changes to detect changes to risk posture. Another exploitability agent distinguishes reachable vulnerabilities that may be buried in code scan results; a fix and remediation agent proposes code changes to address risk; and a risk intelligence graph agent can answer questions about risk across code repositories, build workflows, secrets, dependencies and clouds. Cycode agents support connections to third-party tools using MCP.
Cycode and Endor Labs have previously taken different approaches to AppSec, but according to Marks, this week's updates increase the overlap between them as the software supply chain security and application security posture management (ASPM) markets converge.
"Software supply chain security has evolved from just source code scanning for open source or third-party software to tying this stuff all together with ASPM," Marks said. "For a while, it was just [software bills of materials] SBOMs and SCA tools, but now software supply chain security is becoming a bigger part of AppSec in general."
Who watches the watchers?
The time crunch that AI-generated code creates for security operations teams will likely be a strong incentive to adopt AI agents, but enterprises must also be cautious about how agents access their environments, said Katie Norton, an analyst at IDC.
"This makes technologies like runtime attestation, policy enforcement engines and guardrails for code generation more important than ever," she said. "Organizations leaning into AI need to treat these agents not just as productivity boosters, but as potential supply chain participants that must be governed, monitored and secured just like any third-party dependency or CI/CD integration."
Endor Labs agents review code but don't generate it, a company spokesperson said. Users can govern the new AI agents with the same role-based access controls they use with the existing product. A Lineaje spokesperson said it provides provenance and verification for its agent-generated code. Cycode had not answered questions about how it secures AI agents as of press time.
MCP also remains subject to open security questions -- the early-stage standard doesn't have its own access control framework. For now, that's being provided by third-party identity and access management providers. Badhwar said Endor does not manage access control for MCP.
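Because MCP doesn't define its own access control, a common stopgap is to put an identity-aware gateway in front of the MCP server that validates a bearer token against an external IAM provider. This is a minimal sketch of that pattern; the token store and function names are hypothetical, standing in for a real IAM lookup.

```python
# Hypothetical in-memory stand-in for an external IAM token lookup.
VALID_TOKENS = {
    "token-abc": {"user": "dev1", "scopes": {"code:read"}},
}

def authorize_mcp_request(headers: dict, required_scope: str) -> bool:
    """Gate an MCP request on a bearer token and a required scope."""
    token = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    identity = VALID_TOKENS.get(token)
    return identity is not None and required_scope in identity["scopes"]

print(authorize_mcp_request({"Authorization": "Bearer token-abc"}, "code:read"))
print(authorize_mcp_request({"Authorization": "Bearer token-abc"}, "code:write"))
```

The point of the sketch is that the authorization decision lives outside MCP itself, which is exactly the piecemeal arrangement Patel objects to in the quote below.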
Informatica's Patel said he's looking for a comprehensive security framework for MCP rather than individual vendors to shore up MCP server access piecemeal.
"I don't see tools stitched on top of old systems as tools for MCP," he said. "I really want an end-to-end system that can track and monitor all of my MCP infrastructure."
Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.