
Fix for Azure Health Bot vulnerabilities prevents exploitation

Researchers disclosed two Azure Health Bot vulnerabilities to Microsoft for which fixes were deployed before the flaws could be exploited.

Two Azure Health Bot vulnerabilities could have allowed privilege escalation via server-side request forgery (SSRF), researchers from cybersecurity company Tenable found. Microsoft, which operates Azure Health Bot, stated that mitigations for the vulnerabilities have been applied to all affected services and that no customer action is required. However, the discovery demonstrated how vulnerabilities in chatbot services could be exploited to compromise patient data privacy.

Azure Health Bot allows developers to "build and deploy AI-powered, compliant, conversational healthcare experiences at scale," Microsoft states. Developers can use the tool to create AI-powered virtual health assistants with customized healthcare capabilities, such as medical databases and triage protocols.

Healthcare organizations can also integrate the service with EMR data using FHIR data connections. Microsoft states that the service "ensures alignment with industry compliance requirements and is privacy protected to HIPAA standards."

Tenable researchers conducted an audit of Azure Health Bot and took an interest in the Data Connections feature, which allows bots to retrieve information from external data sources.

Despite protections meant to prevent unauthorized access to internal APIs, researchers were able to reach the Azure Instance Metadata Service (IMDS) and obtain access tokens that allowed management of cross-tenant resources.

"The Data Connector utilities used within Azure Health Bot's Scenario Editor improperly handled redirect responses from user-supplied endpoints," Tenable stated in its advisory for the first vulnerability, labeled as CVE-2024-38109.

"This allowed researchers access to Azure's IMDS, which gave management access to the internal Microsoft subscription ID governing resources of customers utilizing the Health Bot service."

The second vulnerability, discovered in July 2024, involved the validator mechanism for FHIR data connection endpoints used within Azure Health Bot, which also improperly handled redirect responses stemming from user-supplied endpoints.

With this vulnerability, researchers were able to access Azure's WireServer and components of the service's internal Azure Kubernetes Service (AKS) infrastructure.
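The second pattern can be sketched the same way. The validator name, the FHIR capability probe and the string check below are assumptions for illustration rather than the service's real validation logic; the point is that a probe which follows redirects can be bounced from a user-supplied FHIR endpoint to internal targets such as WireServer at 168.63.129.16.

```python
import requests

WIRESERVER = "http://168.63.129.16"  # Azure's internal WireServer address

def validate_fhir_endpoint(fhir_base_url: str) -> bool:
    """Naive endpoint validator: checks the typed URL, then probes it with redirects enabled."""
    if fhir_base_url.startswith(WIRESERVER):
        return False  # checks only the literal string the user entered
    # The probe itself still follows 3xx responses, so an attacker-controlled
    # endpoint can redirect it to WireServer or other internal infrastructure.
    resp = requests.get(f"{fhir_base_url}/metadata", timeout=5)  # FHIR capability probe
    return resp.ok
```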

Tenable reported its findings for each vulnerability to the Microsoft Security Response Center (MSRC) in June and July 2024, respectively. MSRC deployed fixes to the affected environments within a week of each report, and no evidence indicated that the vulnerabilities had been exploited.

Microsoft's fix was to reject redirect status codes entirely for data connection endpoints, eliminating the attack vector in the process.
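In the terms of the earlier sketches (again an assumption about implementation detail, not Microsoft's actual code), rejecting redirect status codes entirely amounts to disabling redirect following and failing closed on any 3xx response:

```python
import requests

def fetch_data_connection(user_supplied_url: str) -> requests.Response:
    """Fetcher that refuses to follow redirects from user-supplied endpoints."""
    resp = requests.get(user_supplied_url, timeout=5, allow_redirects=False)
    if 300 <= resp.status_code < 400:
        # Fail closed on any redirect instead of chasing the Location header.
        raise ValueError(f"redirect responses are rejected (got {resp.status_code})")
    return resp
```

Because the redirect is never followed, an attacker-controlled endpoint can no longer bounce the service's outbound request toward IMDS, WireServer or other internal infrastructure.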

"The vulnerabilities discussed in this post involve flaws in the underlying architecture of the AI chatbot service rather than the AI models themselves," Tenable added.

Researchers highlighted the importance of maintaining strong web application and cloud security controls as AI-powered services continue to gain popularity. The vulnerability disclosures and Microsoft's quick fixes also underscored the value of collaboration within the security community when it comes to discovering, disclosing and patching vulnerabilities.

Jill McKeon has covered healthcare cybersecurity and privacy news since 2021.
