
How to secure AI infrastructure: Best practices

AI tools give malicious hackers an ever-larger attack surface to penetrate. But there are steps you can take to ensure your organization's AI foundation remains safe.

AI and generative AI represent great opportunities for enterprise innovation, but as these tools become more prevalent, their attack surfaces attract malicious hackers probing potential weaknesses. The same capabilities that enable AI to transform industries also make it a lucrative target for malicious actors.

Let's examine why constructing a secure AI infrastructure is so important and then jump into key security best practices to help keep AI safe.

Top AI infrastructure security risks

Among the risks companies face with their AI systems are the following:

  • Broadened attack surface. AI systems often rely on complex, distributed architectures involving cloud services, APIs and third-party integrations, all of which can be exploited.
  • Injection attacks. Threat actors manipulate training data or prompt inputs to alter AI behavior, leading to false predictions, biased outputs or malicious outcomes.
  • Data theft and leakage. AI systems process vast amounts of sensitive data; unsecured pipelines can result in breaches or misuse.
  • Model theft. Threat actors can reverse-engineer models or extract intellectual property through adversarial methods.

Addressing these risks requires comprehensive and proactive strategies tailored to AI infrastructure.

How to improve the security of AI environments

While AI applications show enormous promise, they also expose major security flaws. Recent reports highlighting DeepSeek's security vulnerabilities only scratch the surface; most generative AI (GenAI) systems exhibit similar weaknesses. To properly secure AI infrastructure, enterprises should follow these best practices:

  • Implement zero trust.
  • Secure the data lifecycle.
  • Harden AI models.
  • Monitor AI-specific threats.
  • Secure the supply chain.
  • Maintain strong API security.
  • Ensure continuous compliance.

Implement zero trust

Zero trust is a foundational approach to securing AI infrastructure. The framework operates on the principle of "never trust, always verify," ensuring all users and devices accessing resources are authenticated and authorized. Zero-trust microsegmentation minimizes lateral movement within the network, while continuous monitoring flags unauthorized access attempts and other anomalies.
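In practice, "never trust, always verify" means checking identity and authorization on every request to an AI service, not just at the network edge. Here's a minimal sketch in Python using the PyJWT library; the key file, audience, issuer and role names are hypothetical placeholders for whatever your identity provider actually issues:

```python
# A minimal sketch of per-request verification, assuming an identity
# provider that issues RS256-signed JWTs. The key file, audience, issuer
# and role names are hypothetical. Requires: pip install pyjwt[crypto]
import jwt

PUBLIC_KEY = open("idp_public_key.pem").read()  # hypothetical IdP public key


def authorize_request(token: str, required_role: str) -> dict:
    """Verify the caller on every request -- never trust, always verify."""
    claims = jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],              # pin the algorithm explicitly
        audience="ai-inference-api",       # hypothetical audience
        issuer="https://idp.example.com",  # hypothetical issuer
    )
    # Authorization on top of authentication: check the caller's role.
    if required_role not in claims.get("roles", []):
        raise PermissionError(f"caller lacks required role {required_role!r}")
    return claims
```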

Secure the data lifecycle

AI systems are only as secure as the data they ingest, process and output. Key AI data security actions include the following:

  • Encrypt data. Encrypt data at rest, in transit and during processing using advanced encryption standards; see the sketch after this list. Increasingly, this means quantum-safe encryption: today's quantum computers can't break existing encryption schemes, but that won't necessarily hold in the next few years, and data stolen now could be decrypted later.
  • Ensure data integrity. Use hashing techniques and digital signatures to detect tampering.
  • Mandate access control. Apply strict role-based access control to limit exposure to sensitive data sets.
  • Minimize data. Reduce the amount of data collected and stored to minimize potential damage from breaches.
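As an illustration of the encryption and integrity steps above, here's a minimal sketch using the Python cryptography package to encrypt a data set at rest with AES-256-GCM -- symmetric AES-256 is generally considered resistant to known quantum attacks -- and to record a SHA-256 digest for tamper detection. The file names are hypothetical, and key management through a KMS or HSM is out of scope here:

```python
# Minimal sketch: encrypt a data set at rest and record an integrity digest.
# Uses the `cryptography` package; file names are hypothetical placeholders.
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_dataset(path: str, key: bytes) -> None:
    plaintext = open(path, "rb").read()
    # Record a digest of the plaintext so tampering is detectable later.
    digest = hashlib.sha256(plaintext).hexdigest()
    nonce = os.urandom(12)                     # standard 96-bit GCM nonce
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    with open(path + ".enc", "wb") as f:
        f.write(nonce + ciphertext)            # store nonce with ciphertext
    with open(path + ".sha256", "w") as f:
        f.write(digest)


key = AESGCM.generate_key(bit_length=256)      # in practice, fetch from a KMS
encrypt_dataset("training_data.parquet", key)  # hypothetical data set file
```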

Harden AI models

Take the following steps to protect the integrity and confidentiality of AI models:

  • Adversarial training. Incorporate adversarial examples during model training to improve resilience against manipulation; a minimal example follows this list. Conduct adversarial training at least quarterly, hold after-action reviews after each cycle and increase the sophistication of subsequent threat training. Done continuously, this builds dynamic, adaptive security teams.
  • Model encryption. Encrypt trained models to prevent theft or unauthorized use. Ensure all future encryption is quantum-safe to prevent the emerging possibility of encryption breaking with quantum computing.
  • Runtime protections. Use technologies such as secure enclaves -- for example, Intel Software Guard Extensions -- to protect models during inference.
  • Watermarking. Embed unique, hard-to-detect identifiers in models to trace and identify unauthorized usage.
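To make the adversarial training bullet concrete, here's a minimal sketch of one training step using the fast gradient sign method (FGSM) in PyTorch. The model, optimizer and epsilon value are assumptions; production pipelines typically use stronger attacks, such as projected gradient descent:

```python
# Minimal sketch of one adversarial-training step with FGSM in PyTorch.
# The model, data loader and epsilon are hypothetical.
import torch
import torch.nn.functional as F


def fgsm_examples(model, x, y, epsilon=0.03):
    """Perturb inputs in the direction that most increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()


def adversarial_training_step(model, optimizer, x, y):
    model.train()
    x_adv = fgsm_examples(model, x, y)
    optimizer.zero_grad()  # clear gradients left over from example crafting
    # Train on a mix of clean and adversarial batches to improve resilience.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```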

Monitor AI-specific threats

Traditional monitoring tools might not capture AI-specific threats. Invest in specialized monitoring that can detect the following:

  • Data poisoning. Suspicious patterns or anomalies in training data that could indicate tampering. Recent studies have found poisoning to be a significant and currently exploitable AI vulnerability. In related security testing, DeepSeek failed 100% of HarmBench attacks; other AI models did not fare significantly better.
  • Model drift. Unexpected deviations in model behavior that might result from adversarial attacks or degraded performance; a detection sketch follows this list.
  • Unauthorized API access. Unusual API calls or payloads indicative of exploitation attempts.
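As one way to implement drift detection, the following sketch compares recent prediction scores against a saved baseline with a two-sample Kolmogorov-Smirnov test from SciPy. The score files and p-value threshold are hypothetical:

```python
# Minimal sketch of model-drift monitoring: compare recent prediction scores
# against a saved baseline distribution. Thresholds and files are hypothetical.
import numpy as np
from scipy.stats import ks_2samp


def check_drift(baseline_scores: np.ndarray,
                recent_scores: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True if recent outputs deviate significantly from the baseline."""
    statistic, p_value = ks_2samp(baseline_scores, recent_scores)
    drifted = p_value < p_threshold
    if drifted:
        # In production, raise an alert for investigation: possible
        # adversarial activity, data poisoning or degraded performance.
        print(f"Drift detected: KS={statistic:.3f}, p={p_value:.4f}")
    return drifted


baseline = np.load("baseline_scores.npy")  # hypothetical saved baseline
recent = np.load("recent_scores.npy")      # hypothetical rolling window
check_drift(baseline, recent)
```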

Several companies, including IBM, SentinelOne, Glasswall and Wiz, offer tools and services designed to detect and mitigate AI-specific threats.

Secure the supply chain

AI infrastructure often depends on third-party components, from open-source libraries to cloud-based APIs. Best practices to secure the AI supply chain include the following:

  • Dependency scanning. Regularly scan and patch vulnerabilities in third-party libraries. This step has often been overlooked; widely used libraries have run in production for years before major vulnerabilities, such as Log4Shell in Log4j, came to light.
  • Vendor risk assessment. Evaluate the security posture of third-party providers and enforce stringent service-level agreements. Monitor continuously.
  • Provenance tracking. Maintain records of the data sets, models and tools used throughout the AI lifecycle, as sketched below.
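A provenance record can be as simple as a manifest of cryptographic hashes. The following sketch hashes hypothetical pipeline artifacts -- a data set, model weights and a dependency lockfile -- into a JSON manifest that auditors can verify later:

```python
# Minimal sketch of provenance tracking: hash every artifact in the AI
# pipeline into an auditable manifest. All paths are hypothetical.
import hashlib
import json
from datetime import datetime, timezone


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


artifacts = ["training_data.parquet", "model_v3.pt", "requirements.txt"]
manifest = {
    "created": datetime.now(timezone.utc).isoformat(),
    "artifacts": {path: sha256_of(path) for path in artifacts},
}
with open("provenance_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```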

Maintain strong API security

APIs underpin AI systems, enabling data flow and external integrations. To help secure AI infrastructure, use API gateways to authenticate callers, rate-limit requests and monitor traffic. In addition, implement OAuth 2.0 and TLS for secure communications. Finally, regularly test APIs for vulnerabilities, such as broken authentication or improper input validation.
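Here's a minimal sketch of those controls on an AI inference endpoint using FastAPI: bearer-token authentication plus a naive in-memory rate limiter. The endpoint, token check and limit are hypothetical; production systems should rely on a real API gateway, an OAuth 2.0 provider and distributed rate limiting:

```python
# Minimal sketch of API controls on an AI endpoint: bearer-token auth plus
# a naive in-memory rate limiter. Names and limits are hypothetical.
import time
from collections import defaultdict

from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

app = FastAPI()
bearer = HTTPBearer()
request_history: dict[str, list[float]] = defaultdict(list)
RATE_LIMIT = 30  # requests per minute per token (hypothetical)


def token_is_valid(token: str) -> bool:
    # Hypothetical placeholder: verify a JWT or consult an API key store.
    return token == "demo-token"


def authenticate(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> str:
    token = creds.credentials
    if not token_is_valid(token):
        raise HTTPException(status_code=401, detail="invalid token")
    window_start = time.time() - 60
    recent = [t for t in request_history[token] if t > window_start]
    if len(recent) >= RATE_LIMIT:
        raise HTTPException(status_code=429, detail="rate limit exceeded")
    request_history[token] = recent + [time.time()]
    return token


@app.post("/v1/predict")  # hypothetical inference endpoint
def predict(payload: dict, token: str = Depends(authenticate)):
    return {"result": "..."}  # model inference would run here
```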

Ensure continuous compliance

AI infrastructure often processes and relies on sensitive data subject to regulatory requirements, such as GDPR, CCPA and HIPAA. Do the following to automate compliance processes:

  • Audit. Continuously audit AI systems to ensure policies are followed; a minimal scanning sketch follows this list.
  • Report. Generate detailed reports for regulatory bodies.
  • Close gaps. Proactively identify gaps and implement corrective measures.
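As one illustration of the audit step, the following sketch scans an AI service's logs for common PII patterns and emits a findings report. The patterns and log path are hypothetical; a real compliance program maps such checks to specific GDPR, CCPA and HIPAA controls:

```python
# Minimal sketch of an automated compliance check: scan logs for common PII
# patterns and report findings. Patterns and paths are hypothetical.
import json
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number
}


def audit_log_file(path: str) -> dict:
    findings = {name: 0 for name in PII_PATTERNS}
    with open(path) as f:
        for line in f:
            for name, pattern in PII_PATTERNS.items():
                findings[name] += len(pattern.findall(line))
    return findings


findings = audit_log_file("inference_requests.log")  # hypothetical log file
print(json.dumps({"file": "inference_requests.log",
                  "findings": findings}, indent=2))
```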

Keep in mind that compliance is necessary but not, by itself, sufficient to protect AI infrastructure.

As AI and GenAI continue to proliferate, security is a key concern. Use a multilayered approach to protect data and models and to secure APIs and supply chains. Implement best practices and deploy advanced security technologies. These steps will help CISOs and security teams protect their AI infrastructure against evolving threats. The time to act is now.

Jerald Murphy is senior vice president of research and consulting with Nemertes Research. With more than three decades of technology experience, Murphy has worked on a range of technology topics, including neural networking research, integrated circuit design, computer programming and global data center design. He was also the CEO of a managed services company.
