
Generative AI emerges for DevSecOps, with some qualms

New and developing tools use natural language processing to assist DevSecOps workflows, but concerns about security risks linger among developers.

The generative AI boom has touched virtually every aspect of IT. Now it's DevSecOps' turn, a development that has sparked trepidation for some as the technology continues to mature.

Software supply chain security and observability vendors have launched updates and new products in the past two months that add natural language processing interfaces to DevSecOps automation and software bill of materials (SBOM) analysis software. These include startup Lineaje.ai, which rolled out AI bots fronted with generative AI for SBOM analysis, and observability vendors Dynatrace and Aqua Security, which added generative AI-based interfaces to security and vulnerability management tools.

These vendors and others in the software supply chain security market also plan to explore further applications for large language models (LLMs) to assist in secure software delivery and incident management for DevSecOps.

It's no surprise that IT vendors would want to cash in on the generative AI craze that's grown since ChatGPT was publicly released in November 2022. But there are also indications that DevSecOps teams are seeking such features.

Of 800 DevOps and SecOps leaders surveyed by software supply chain security vendor Sonatype in July, 97% are already using generative AI, and 74% report feeling pressure to use it.

"Software engineers are using the technology to research libraries and frameworks and write new code, while application security professionals are using it to test and analyze code and identify security issues," according to Sonatype's survey report, released this week. "An overwhelming majority are using generative AI today … an incredible (even historic) rate of adoption and organizational effort to establish processes, though some report feeling pressured to incorporate the technology despite concerns about security risks."

Awareness is growing of generative AI's risks and pitfalls, both in how generative AI and other statistical models are deployed in enterprise environments and in their use as tools to help practitioners evaluate software supply chain resources and write secure code. This week, the Open Source Security Foundation and national cybersecurity leaders met in Washington, D.C., to discuss these issues and called for improvements in all areas of AI security.

Generative AI for DevSecOps: promising or Pandora's box?

While both DevOps and SecOps pros surveyed by Sonatype felt generative AI has been overhyped, "developers take a more cynical view of generative AI than security leads in general," according to the report: 61% of developers said the technology was overhyped, compared with 37% of security leads.

This skepticism was reflected in a virtual roundtable discussion hosted by SBOM vendor Lineaje ahead of its Aug. 2 release of BOMbots, a set of automation bots that deliver optimized recommendations for vulnerability remediation and account for how such changes will affect the rest of a user's IT environment.


"I'm not very sold on security and generative AI at this point of time," said Chitra Elango, senior director of DevSecOps, vulnerability management and red team at financial services company Fannie Mae, based in Washington D.C., during the discussion. "If something is so exciting, that means it comes with a lot of security [implications] behind the scenes. … It could be used for both positive and negative."

Other roundtable panelists agreed a cautious approach is warranted but noted that bad actors are already using generative AI. Because generative AI will make the volume of software grow exponentially, they said, defenders should consider how such tools can help them keep pace.

"It's definitely got a long way to go in terms of maturity, but … it's not like the world is going to wait for it to be superb at software development before [it's] applied," said Michael Machado, chief data and security officer at Shippo, an e-commerce shipping software maker in San Francisco. "On the other hand, a coding assistant that catches security problems … helps scale the skillset [brought to] the work."

Neither Fannie Mae nor Shippo is a Lineaje customer, though Elango said in a separate online interview that her company may consider proof-of-concept testing Lineaje in the future.

Lineaje and other vendors that have folded generative AI into their tools also don't think the tech is a panacea. Instead, company execs said they believe it's most helpful in combination with other forms of AI data processing, automation and analytics as a user-friendly interface into data insights.


"We have always collected a tremendous amount of data, since the beginning," Lineaje co-founder and CEO Javed Hasan said in an interview with TechTarget Editorial last month, referring to when the company came out of stealth in March 2022. "Where LLMs come in is they can take all the information we have and make it easy to use. So rather than building your own dashboard, the front end is driven by LLMs simplifying all the data we collect and making it very interactive."

Lineaje software uses crawlers to collect up to 170 attributes on each software component listed in an SBOM, including open source libraries and dependencies, Hasan said. This naturally leads to an overwhelming number of vulnerabilities reported -- thousands, in many cases. Lineaje BOMbots help prioritize these vulnerabilities by assessing how impactful and risky fixing them will be in customers' specific environments. Eventually, more Lineaje bots will automatically carry out fixes with human approval, Hasan said.
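Lineaje hasn't published how BOMbots weigh those tradeoffs. The following Python sketch only illustrates the general idea of environment-aware prioritization; the attribute names, weights and scoring formula are entirely hypothetical.

```python
# Illustrative sketch only: attribute names and scoring weights are
# hypothetical, not Lineaje's actual BOMbot logic.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss_score: float     # base severity, 0-10
    reachable: bool       # is the vulnerable code actually invoked?
    fix_breaks_api: bool  # would the patched version change the API?
    dependents: int       # how many components depend on this one

def remediation_priority(v: Vulnerability) -> float:
    """Rank fixes by payoff vs. blast radius in a specific environment."""
    score = v.cvss_score
    if v.reachable:
        score *= 2                   # exploitable paths matter more
    if v.fix_breaks_api:
        score /= (1 + v.dependents)  # risky fixes with many dependents sink
    return score

vulns = [
    Vulnerability("CVE-2023-0001", 9.8, reachable=False, fix_breaks_api=True, dependents=40),
    Vulnerability("CVE-2023-0002", 6.5, reachable=True, fix_breaks_api=False, dependents=2),
]
for v in sorted(vulns, key=remediation_priority, reverse=True):
    print(v.cve_id, round(remediation_priority(v), 1))
```

In this toy example, a moderate but reachable flaw outranks a critical one whose fix would break 40 dependents, which is the kind of environment-specific tradeoff Hasan describes.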

Cloud-native security and observability vendors Aqua Security and Dynatrace take a similar stance -- that generative AI will function best as part of a multi-modal approach to AI for DevSecOps teams.

Aqua's AI-Guided Remediation feature, released Aug. 1, uses OpenAI's GPT-4 LLM to speed up how DevSecOps teams interact with underlying data insights to solve problems, said Tsvi Korren, field CTO at Aqua. But it doesn't yet create its own analyses.

"Instead of [teams] going to Google and Stack Overflow and looking things up -- because that's what really happens; nobody knows it all off the top of their head -- we are just providing a shortcut and making sure everybody has the same information," Korren said in an interview.

Generative AI's supply chain security potential

While most commercial SBOM analysis vendors use some form of machine learning or AI to digest data, few have added generative AI so far. Instead, recent blog posts from two SBOM vendors warned about the security hazards of generative AI.

"The open-source ecosystem surrounding LLMs lacks the maturity and security posture needed to safeguard these powerful models. As their popularity surges, they become prime targets for attackers," according to a June 28 Rezilion blog post.

Endor Labs researchers published the results of testing LLMs on analyzing open source repositories for malware. Those results were grim, according to a company blog post in July.

"Existing LLM technologies still can't be used to reliably assist in malware detection and scale -- in fact, they accurately classify malware risk in barely 5% of all cases," the blog post reads. "They have value in manual workflows, but will likely never be fully reliable in autonomous workflows."


However, Endor Labs Station 9 lead security researcher Henrik Plate -- Endor's lead tester on the study -- said in an interview that he's still optimistic about the value of generative AI to automate DevSecOps workflows, including the detection and remediation of vulnerable open source dependencies.

"Classical engineering program analysis tools also do a good job. But one very specific other application where we are thinking of extending our program analysis with AI is when you build core graphs [of relationships between software functions]. You need to understand, 'This function -- which other function does it call?'" Plate said. "It sounds easy. But depending on the programming language, there is some uncertainty when it comes to types."

The long-term focus for Lineaje is not on generating code snippets for DevSecOps teams to remediate issues in software -- though this is in the works -- but on assisting with the remediation of open source supply chain vulnerabilities, Hasan said.

"Our research data has revealed that [internal] organization developers cannot fix 90% of the issues in [open source] dependencies without modifying the code so much that they create their own branch, which means that they now automatically own that code, whether they want to or not," he said.

For now, Lineaje BOMbots and generative AI chat can inform developers about whether vulnerable open source components are well maintained and how likely their maintainers are to fix a given issue. Eventually, Hasan said, he sees BOMbots moving more directly into open source remediation.

"We are, effectively, with Lineaje AI, stepping in to solve the open source remediation problem," he said. "Just like they can create a Jira ticket, BOMbots will go create issues in well-maintained open source projects so that those [groups] can fix it."

Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.
