
Generative AI security best practices to mitigate risks
When tackling AI security issues, enterprises should minimize shadow IT risks, establish an AI governance council and train employees on the proper use of AI tools.
For the second consecutive year at Enterprise Connect, I moderated a panel discussion about generative AI security and compliance. This year's panel featured speakers from ChoiceTel, Google, Theta Lake and Zoom. Speakers brought a variety of perspectives as consultants, former chief information security officers and technology providers. This year, some key themes emerged.
Balance security, compliance and usability
Several speakers discussed the need for companies to take advantage of AI safely and prevent employees from bypassing IT to use their own tools. Over the last several years, U.S. companies -- especially in finance -- have been hit with hefty fines for using unsanctioned public messaging apps like Signal and WhatsApp. Signal, in particular, was in the news recently after U.S. government officials shared military plans on the app and mistakenly included a journalist in the group chat.
Companies that proactively provide access to apps and features -- in accordance with security and compliance needs -- are better positioned to realize the benefits of emerging technologies while minimizing the risk of shadow IT.
Centralize governance
Several speakers also discussed the benefits of having an AI governance council to establish policies and gain insights into use cases and risks. Governance councils are well positioned to evaluate various risk scenarios and the likelihood of occurrence. They can provide a means by which IT, business units or even individual users can apply for new app or feature approval.
An AI governance council can also apply industry frameworks such as those developed by NIST for AI governance and security. A global Metrigy study of 400 organizations found that 48.9% of companies have an AI usage governance strategy, and 39% plan to have one by the end of 2025.
Establish public tool policies
Metrigy's research shows that 85.3% of companies allow employees to use public AI tools such as ChatGPT, DALL-E, Grok or Perplexity. But just 60.5% have created guidelines for safe usage.
Panelists at Enterprise Connect recommended that companies enact the following policies:
- Develop training and policies to govern usage.
- Make employees aware of security and risk concerns.
- Monitor usage.
- Hold employees accountable for any misuse that puts their company at risk.
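The "monitor usage" and accountability recommendations above can be approximated with a thin audit layer in front of any AI tool call. The function names, log fields and keyword list below are illustrative assumptions, not any specific product's API:

```python
# Illustrative sketch: audit-log every AI tool request so security teams
# can monitor usage and follow up on potential misuse.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_usage_audit")

# Hypothetical keyword list a DLP policy might flag in outbound prompts
BLOCKED_TERMS = {"ssn", "account_number"}

def log_ai_request(user_id: str, tool: str, prompt: str) -> bool:
    """Record the request and flag prompts containing sensitive terms.

    Returns True if the request may proceed, False if it was blocked.
    """
    flagged = any(term in prompt.lower() for term in BLOCKED_TERMS)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "tool": tool,
        "prompt_chars": len(prompt),  # log size, not content, for privacy
        "flagged": flagged,
    }))
    return not flagged

log_ai_request("u42", "public-chatbot", "Summarize this meeting agenda")
```

In practice, a real deployment would route this through a secure web gateway or CASB rather than application code, but the principle is the same: every request leaves an audit trail, and policy violations are flagged rather than silently allowed.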
Ensure data security and quality
As companies increasingly train models on their own data coupled with public data, they must ensure they don't create data leakage concerns. AI teams must test output responses for accuracy and ensure users only have access to permissible data. Additionally, they must establish incident response procedures in the event of a data breach.
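The requirement that users only access permissible data can be sketched as a pre-retrieval permission filter: documents are checked against the user's entitlements before they ever reach a model prompt. The data model and function names here are hypothetical, a minimal sketch rather than any vendor's implementation:

```python
# Hypothetical sketch: filter retrieved documents by user entitlements
# before assembling a model prompt, so responses only draw on data the
# requesting user is permitted to see.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Document:
    doc_id: str
    text: str
    allowed_roles: frozenset  # roles entitled to read this document

@dataclass
class User:
    user_id: str
    roles: set = field(default_factory=set)

def filter_for_user(docs: list, user: User) -> list:
    """Return only documents the user's roles entitle them to read."""
    return [d for d in docs if d.allowed_roles & user.roles]

def build_context(docs: list, user: User) -> str:
    """Assemble model context from permitted documents only."""
    permitted = filter_for_user(docs, user)
    return "\n---\n".join(d.text for d in permitted)

docs = [
    Document("d1", "Public product FAQ", frozenset({"employee"})),
    Document("d2", "Unreleased earnings draft", frozenset({"finance"})),
]
analyst = User("u1", {"employee"})
print(build_context(docs, analyst))  # only the FAQ reaches the prompt
```

Enforcing the check at retrieval time, rather than trying to filter model output afterward, keeps restricted text out of the prompt entirely and aligns the AI pipeline with existing identity and access management controls.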
Final recommendations
When developing an AI security and compliance strategy, panelists recommended that enterprises focus on the following:
- As previously noted, establish an AI governance council to develop and enforce policies.
- Conduct risk assessments regularly, especially as new AI capabilities like agentic AI enter the workplace.
- Follow security frameworks from NIST and the Cybersecurity and Infrastructure Security Agency to align with best practices for AI security and governance.
- Ensure compliance approaches can capture AI-generated content and use AI to improve the ability to detect breaches and policy violations.
- Train users on securing AI tools.
- Ensure the safe use of AI capabilities and eliminate any friction between end users and IT so that employees don't bypass company directives and adopt their own tools.
- Ensure alignment with identity and access management controls to minimize the risk of data leakage and loss.
Irwin Lazar is president and principal analyst at Metrigy, where he leads coverage on the digital workplace. His research focus includes unified communications, VoIP, video conferencing and team collaboration.