Wiz reveals DeepSeek database exposed API keys, chat history
Wiz expressed concern about security shortcomings with AI tools and services amid the rapid adoption and rising popularity of offerings like DeepSeek-R1.
An exposed DeepSeek database leaked highly sensitive information, including API keys and chat histories, and could have let attackers gain full control of the database within the Chinese AI vendor's environment, according to new Wiz research.
DeepSeek gained increasing popularity following the release of its first-generation large language models, DeepSeek-R1-Zero and DeepSeek-R1, on Jan. 20. In a blog post published Wednesday, Wiz security researcher Gal Nagli said the models' rapid adoption led the security team to assess the vendor's security posture, and what they found was alarming.
Nagli revealed that "[w]ithin minutes" the team discovered "two unusual, open ports (8123 & 9000)" that led to a publicly accessible ClickHouse database linked to DeepSeek, which exposed highly sensitive data. ClickHouse is an open source database management system originally developed by Russian tech giant Yandex that the blog post said is "designed for fast analytical queries on large datasets."
Nagli warned that the exposure is dangerous because the system is used for real-time data processing, log storage and big data analytics.
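To illustrate the kind of check Wiz describes, the sketch below uses Python's requests library to send a simple query to a ClickHouse HTTP interface left open on port 8123. The host name is a placeholder and the query is a generic example, not the exact procedure the researchers followed.

```python
# Illustrative sketch only: probing a ClickHouse HTTP interface that is
# reachable without authentication. The host below is a placeholder, not
# a real DeepSeek endpoint, and the query is a generic example.
import requests

HOST = "clickhouse.example.com"  # placeholder host

# ClickHouse's HTTP interface (default port 8123) accepts SQL in the
# "query" parameter; port 9000 serves the native TCP protocol.
response = requests.get(
    f"http://{HOST}:8123/",
    params={"query": "SHOW TABLES"},
    timeout=10,
)

# An open, unauthenticated instance answers with its table list in plain
# text -- the first sign that the database is publicly exposed.
print(response.status_code)
print(response.text)
```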
"The exposure includes over a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information. The Wiz Research team immediately and responsibly disclosed the issue to DeepSeek, which promptly secured the exposure," Nagli wrote in the blog post.
Nagli added that the exposure could let an unauthenticated attacker gain full database control and potentially escalate privileges within the DeepSeek environment. He also said the ability to access ClickHouse's log stream "posed a critical risk to DeepSeek's own security and for its end-users." Nagli warned that, in addition to retrieving sensitive logs and chat messages, attackers could also potentially exfiltrate passwords and local files.
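A minimal sketch of why that level of access matters: once the HTTP interface accepts arbitrary SQL, reading whatever tables hold log data takes a single request. The host and table names below are assumptions for illustration, not the actual DeepSeek schema.

```python
# Hypothetical follow-on query against the same open endpoint. The table
# name "log_stream" is assumed for illustration; the point is that any
# table, including ones holding plaintext logs and chat history, can be
# read without credentials once the interface is exposed.
import requests

HOST = "clickhouse.example.com"  # placeholder host

query = "SELECT * FROM log_stream LIMIT 10"
response = requests.post(f"http://{HOST}:8123/", data=query, timeout=10)

# On an exposed instance this returns raw log rows -- the kind of chat
# history, secret keys and backend details Wiz says it observed.
print(response.text)
```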
While Wednesday's research focused on DeepSeek, Nagli said it speaks to a broader problem regarding AI security, or a lack thereof.
"The rapid adoption of AI services without corresponding security is inherently risky. ... While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks -- like accidental external exposure of databases," he wrote. "These risks, which are fundamental to security, should remain a top priority for security teams."
Nagli added that security is becoming increasingly important as AI platforms are embedded across critical infrastructure providers and businesses worldwide that handle highly sensitive data. He warned that organizations often rush to adopt AI tools and services from startups like DeepSeek while overlooking security.
"The world has never seen a piece of technology adopted at the pace of AI. Many AI companies have rapidly grown into critical infrastructure providers without the security frameworks that typically accompany such widespread adoptions," he wrote in the blog post, noting that it's important for security teams and AI engineers to work together to secure the technology.
DeepSeek has already gained attention from attackers, highlighting the urgency of securing the vendor's infrastructure and services. Earlier this week, DeepSeek disclosed that "large-scale malicious attacks" had disrupted its services, including user registration. However, the company has not provided additional details about the attacks.
Arielle Waldman is a news writer for Informa TechTarget covering enterprise security.