
Orca: AI services, models falling short on security

New research from Orca Security shows that AI services and models in the cloud contain numerous risks and security shortcomings that threat actors could exploit.

The pace of AI development continues to accelerate, but many organizations are failing to apply basic security measures to their models and tools, according to new research from Orca Security.

The cloud security vendor published its "2024 State of AI Security Report" on Wednesday, detailing alarming risks and security shortcomings in AI models and tools. Orca researchers compiled the report by analyzing data from cloud assets on AWS, Azure, Google Cloud, Oracle Cloud and Alibaba Cloud.

The report found that although AI usage has surged among organizations, many are not deploying the tools securely. For example, Orca warned that organizations struggle to disable risky default settings that could allow attackers to gain root access, deploy packages containing vulnerabilities that threat actors could exploit, and unknowingly expose sensitive code.

This is the latest report highlighting ongoing security risks with the rapid adoption of AI. Last month, Veracode also warned that developers are putting security second when it comes to using AI to write code. Now, Orca has shed light on how the problems continue to grow within enterprises.

While 56% of organizations deploy their own AI models for collaboration and automation, a significant number of the software packages they use contain at least one CVE.

"Most vulnerabilities are low to medium risk -- for now. [Sixty-two percent] of organizations have deployed an AI package with at least one CVE. Most of these vulnerabilities are medium risk with an average CVSS score of 6.9, and only 0.2% of the vulnerabilities have a public exploit (compared to the 2.5% average)," Orca wrote in the report.

Insecure configurations and controls

Orca found that Azure OpenAI was the AI service organizations most frequently used to build custom applications, but there are concerns. The report stated that 27% of organizations did not configure Azure OpenAI accounts with private endpoints, which could allow attackers to "access, intercept, or manipulate data transmitted between cloud resources and AI services."
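As a rough illustration (not Orca's tooling), a check for that misconfiguration could be scripted with the azure-identity and azure-mgmt-cognitiveservices Python packages; the subscription ID below is a placeholder, and exact property names can vary across SDK versions.

```python
# Hypothetical audit sketch: flag Azure OpenAI accounts that still accept
# public network traffic instead of requiring private endpoints.
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

client = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

for account in client.accounts.list():
    if account.kind != "OpenAI":
        continue  # only look at Azure OpenAI accounts
    public_access = getattr(account.properties, "public_network_access", None)
    if public_access != "Disabled":
        print(f"{account.name}: public network access is not disabled; "
              "consider routing traffic through a private endpoint")
```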

The report highlighted a significant problem with the default settings for Amazon SageMaker, a machine learning service that organizations use to develop and deploy AI models in the cloud. Disabling risky default settings is a broad challenge organizations face when adopting AI tools and platforms in business environments.


"The default settings of AI services tend to favor development speed rather than security, which results in most organizations using insecure default settings. For example, 45% of Amazon SageMaker buckets are using non randomized default bucket names, and 98% of organizations have not disabled the default root access for Amazon SageMaker notebook instances," the report said.

Orca warned that an attacker could abuse root access to perform any action on the asset. Another problem with Amazon SageMaker, which extends to all the cloud providers included in the report, is that organizations are not using self-managed encryption keys.
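Both of those SageMaker defaults can be overridden when an instance is provisioned. The following sketch, with placeholder names and ARNs, shows one way to disable root access and supply a customer-managed KMS key at creation time:

```python
# Sketch: create a SageMaker notebook instance with root access disabled and
# storage encrypted under a customer-managed KMS key. The instance name, role
# ARN and key ARN are placeholders.
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_notebook_instance(
    NotebookInstanceName="example-hardened-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    RootAccess="Disabled",  # override the permissive default
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/example-key-id",
)
```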

Another issue flagged in the report involved a lack of encryption protection. For example, 98% of organizations using Google Vertex AI had not enabled encryption at rest with self-managed keys. While the report noted that some organizations may have encrypted their data through other means, it warned that the risks are significant. "This leaves sensitive data exposed to attackers, increasing the chances that a bad actor can exfiltrate, delete, or alter the AI model," Orca wrote.
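On Google Cloud, the Vertex AI Python SDK accepts a customer-managed Cloud KMS key as the default encryption key for resources it creates. A minimal sketch, with placeholder project, region and key path, might look like this:

```python
# Sketch: point new Vertex AI resources at a customer-managed encryption key
# (CMEK). The project, region and key path are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="example-project",
    location="us-central1",
    # Datasets, models and jobs created through this SDK session inherit this
    # key unless they specify their own.
    encryption_spec_key_name=(
        "projects/example-project/locations/us-central1/"
        "keyRings/example-ring/cryptoKeys/example-key"
    ),
)
```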

The report also highlighted security risks associated with AI platforms like OpenAI and Hugging Face. For example, Orca found that 20% of organizations using OpenAI have an exposed access key and 35% of companies have an exposed Hugging Face access key.
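Exposed keys often originate from credentials hard-coded in scripts and notebooks that end up in shared repositories. A minimal alternative, assuming conventional environment variable names, is to load them from the environment or a secrets manager instead:

```python
# Sketch: read OpenAI and Hugging Face credentials from environment variables
# rather than hard-coding them in source files.
import os

from huggingface_hub import login
from openai import OpenAI

openai_client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # raises KeyError if unset
login(token=os.environ["HF_TOKEN"])  # authenticate to Hugging Face Hub
```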

Wiz researchers also demonstrated how vulnerable Hugging Face is in research presented at Black Hat USA 2024 last month, showing how they were able to compromise the AI platform and gain access to sensitive data.

Chart: Orca Security listed AI packages organizations deployed that contained at least one vulnerability. Vulnerabilities are just one issue Orca Security highlighted in its new report on AI security risks.

Check default settings

Orca co-founder and CEO Gil Geron spoke with TechTarget Editorial about the problems related to AI's rapid adoption and lack of security. "The roles and responsibilities around using these kinds of technologies are not set in stone or clear. That's why we're seeing a surge in usage of these tools, but risks are on the rise in terms of access, securing data and vulnerabilities," he said.

Geron added that it's important for security practitioners to recognize the risks, set policies and implement boundaries to keep pace with the rapid increase in AI adoption. He stressed that addressing the security problem requires participation from both the engineering and security practitioner sides of an organization.

Geron also said the security challenges are not entirely new, though the tools and platforms are. Every technology starts off very open until the risks are mapped out, he said. Currently, the default settings are very permissive, which makes the tools and platforms easy to use, but the openness also creates security issues.

As of now, he said, it's difficult to say whether the root cause is organizations putting security second to deployment or technology companies needing to do more to protect the tools, models and data sets.

"The fact that there isn't a defined line between what your responsibility is in using the technology and what the vendor responsibility is creates this notion, 'Oh, it's probably secure because it's provided by Google,'" Geron said. "But they can't control how you're using it, and they can't control whether you're training your models on internal data you shouldn't have exposed. They give you the technology, but how you use it is still your responsibility."

It's also unclear whether vendors changing default settings would even help. Geron said AI usage is still experimental, and providers usually wait for feedback from the market. "It makes it challenging in resetting or changing something that you don't know how it will be used," he said.

Geron urged organizations to check the default settings to ensure projects and tools are secure, and he recommended limiting permissions and access.

"And last but not least is pure hygiene of your network, like isolation and separation, which are all good practices for security, but are even more important with these kinds of services," he said.

Arielle Waldman is a news writer for TechTarget Editorial covering enterprise security.
