
Assess and manage the risks of using AI for business

Legal ramifications, public image crises and misleading information are some of the disastrous consequences of incorrectly using AI. If you don't have a policy for managing AI risks, now is the time.

Artificial intelligence continues to change the world, with organizations integrating it into business processes and applications. However, for all of AI's benefits, it also introduces new risks.

Many of the problems that AI creates fall under the category of hallucinations: inaccurate or misleading information that AI generates and presents as factual. Hallucinations can lead to several types of disasters, including damage to a company's public image, financial losses, technical difficulties and even legal issues.

To manage AI hallucinations, organizations must be able to show a chain of logic between data sources and AI results. However, hallucinations are not the only issues plaguing AI use. Privacy, transparency and data management all present opportunities for risk when AI is involved.

There are several AI risks that organizations should monitor. Below are the primary risk categories that businesses should consider. To perform an AI risk assessment, examine each risk type, determine which ones the business faces and then review the suggested mitigation approaches.
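
One lightweight way to operationalize such an assessment is a simple risk register. The Python sketch below seeds a register with the risk categories covered in this article; the AIRisk fields and the evaluation step are illustrative assumptions, not a prescribed tool.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """One entry in a simple AI risk register (fields are illustrative)."""
    category: str                 # e.g., "Privacy" or "Data management"
    description: str
    applies: bool = False         # does this risk affect the business?
    mitigations: list[str] = field(default_factory=list)

# Seed the register with the risk categories discussed in this article.
register = [
    AIRisk("Economic", "Job-loss perception; concentrated corporate power"),
    AIRisk("Privacy", "Private or inferred personal data in AI data sets"),
    AIRisk("Public relations", "AI mistakes and ethics perceptions"),
    AIRisk("Legal and criminal", "IP, copyright and compliance exposure"),
    AIRisk("Misinformation", "False content and data poisoning"),
    AIRisk("Transparency", "Opaque decisions and unverifiable sources"),
    AIRisk("Administration (IT)", "No human in the loop; technical mistakes"),
    AIRisk("Data management", "Misclassification and data leakage"),
]

# The assessment step: mark which risks apply, then attach mitigations
# for review. Replace the placeholder logic with a per-business evaluation.
for risk in register:
    risk.applies = True  # placeholder; evaluate per business unit
    if risk.applies:
        risk.mitigations.append("Assign an owner and a review cadence")

for risk in register:
    status = "applies" if risk.applies else "n/a"
    print(f"{risk.category:20} {status:8} {'; '.join(risk.mitigations)}")
```

Even a register this small forces the two decisions the assessment requires: which risks apply, and who owns each mitigation.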

AI evolves constantly, so the sooner an organization understands AI risk, the sooner it can begin to address and mitigate those concerns.

Economic risk

AI already has an immense economic effect, both at the level of individual businesses and at the macro level of worldwide economies. Economic AI risks include worker perceptions and corporate power abuses.

Perceived job threats

While AI implementations can create new jobs, many of today's workers fear losing their jobs to AI. Even if the reality doesn't match the public perception of the threat AI presents to workers, this perception still has tangible results, including:

  • Workforce demoralization.
  • Potential productivity loss.
  • Talent migration away from AI-affected industries.
  • Misguided policy and regulatory responses.
  • Misguided education responses.
  • Investor hesitancy.

So, how can organizations manage this perception? Begin with clear communication about how AI affects human skills, job security and role changes. Supplement that with upskilling opportunities and new career paths for existing employees. Employee participation and feedback are crucial. Finally, demonstrate responsible AI use companywide.

Concentration of corporate power

Information control is central to economic and social power, and AI is an information-driven industry. Whoever controls the data sets from which AI learns controls much of that power. Large organizations such as Google, Meta, Microsoft and Amazon already dominate access to AI infrastructure, innovation and data sources. Concentrated corporate power leads directly to the economic problems associated with monopolies, including reduced innovation, consumer exploitation and wealth consolidation.

Mitigations for this potential power concentration include antitrust legislation and enforcement, as well as legal protections for workers and consumers. Another intriguing option is a public AI infrastructure that remains independent of corporate control.

Privacy risk

One of today's biggest AI concerns is personal privacy. Careful and thoughtful use of AI is essential to managing this concern.

Privacy violations

AI learns from vast data sets, and it can be difficult to control -- or even know -- whether personal information exists in those sets. The concern goes beyond explicit records such as health data: AI can infer an individual's health status by analyzing data from disparate areas. Such inferences can violate privacy standards, and they can also simply be wrong. The same issue applies to political views, personal habits, economic status and other individual characteristics that AI data sets might capture.

Privacy risk mitigation should include legal protections and ethical behavior that organizations can enforce. In addition, reliable data sources -- unpolluted by private information -- are crucial.
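
One practical way to keep data sources unpolluted by private information is to scan and redact obvious personal identifiers before text enters a training set. Below is a minimal Python sketch assuming simple regex-based detection; the patterns are illustrative, and a production system should rely on a vetted PII-detection library or service instead.

```python
import re

# Illustrative PII patterns; hand-rolled regexes like these miss many
# cases and exist here only to show the pre-ingestion scrubbing step.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> tuple[str, int]:
    """Replace matched PII with placeholders; return text and hit count."""
    hits = 0
    for label, pattern in PII_PATTERNS.items():
        text, n = pattern.subn(f"[{label.upper()} REDACTED]", text)
        hits += n
    return text, hits

record = "Contact Jane at jane.doe@example.com or 555-867-5309."
clean, found = redact(record)
print(clean)   # Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
print(found)   # 2
```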

Loss of personal freedom

People are under greater surveillance than ever, with cameras at intersections, in buildings and along walkways. Predictive policing algorithms, facial recognition and behavior analytics bolster this monitoring.

One concern is the disproportionate targeting of specific communities. This surveillance also affects social behavior, raising concerns about personal freedom.

Surveillance, policing and pattern-matching can threaten fundamental human freedoms, such as speech, assembly, the press and other avenues of social criticism. AI can be used to generate, propagate and enforce these kinds of restrictive environments.

To mitigate these issues, organizations must listen to their employees' and customers' concerns regarding surveillance and the loss of personal freedom. They must also thoroughly vet and understand the AI models the business uses and be transparent about how AI-collected data will be used.

Public relations risk

Using AI in the enterprise can affect how others view the organization. AI is a relative newcomer to the business world, so people have varying levels of trust (and distrust) in it. The accuracy and ethics of AI use fuel these perceptions.

AI mistakes

If a product suffers from an AI-related technical error, misprint or other mistake, it can have a tangible effect on an organization's public image. This is especially true given the public's current caution around AI integration.

Mitigate this risk by using more traditional means of verifying AI information, including human review of all material before release.

AI ethics

In addition to AI accuracy, organizations must consider how the ethical use of AI contributes to public perception. Even the perception that a company doesn't use AI ethically or legally can bring negative attention to its products and services.

The best bet for reducing this risk is publishing and following an ethical use policy for AI. Transparency is one of the strongest defenses against AI risk.

Legal and criminal risk

The court of public opinion is one concern, but the legal court system is another. Businesses must be aware of and prevent AI-related legal and industry compliance violations.

Intellectual property and copyright security

AI systems are trained on massive data sets. If those data sets include copyrighted or confidential material, that material can surface in public-facing results, creating potential legal exposure. A related issue is ownership of AI-generated output.

Carefully monitor and control the data sets AI uses for learning. Providers of large language models must offer transparency around data sources to help manage this risk.
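
What monitoring and controlling training data can look like in practice is sketched below in Python: record provenance for every candidate source and admit only sources whose license or ownership status is on an approved list. The source names, license categories and review dates are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

# Assumed policy: only these license/ownership statuses may be trained on.
APPROVED_LICENSES = {"public-domain", "cc0", "licensed-internal"}

@dataclass(frozen=True)
class DataSource:
    name: str
    license: str      # license or ownership status, recorded up front
    reviewed: date    # when legal last vetted this source

def admit(source: DataSource, catalog: list[DataSource]) -> bool:
    """Admit a source to the training catalog only with known provenance."""
    if source.license not in APPROVED_LICENSES:
        print(f"REJECTED {source.name}: license '{source.license}' not approved")
        return False
    catalog.append(source)
    print(f"ADMITTED {source.name} (reviewed {source.reviewed})")
    return True

catalog: list[DataSource] = []
admit(DataSource("internal-support-tickets", "licensed-internal", date(2024, 5, 1)), catalog)
admit(DataSource("scraped-news-archive", "unknown", date(2024, 5, 1)), catalog)
```

The point of the provenance record is that every item in the catalog can later be tied back to a named, vetted source if a legal question arises.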

Legal and regulatory compliance

Carefully monitor legal and regulatory compliance around AI use, especially concerning data privacy, intellectual property infringement, transparency and outdated compliance practices.

Outdated compliance issues are a serious concern at this stage of AI integration. New laws directly affecting AI continue to emerge. In addition, laws governing data management and personal privacy are expanding, so businesses that use AI must carefully manage that kind of information.

Mitigate this risk by dedicating resources to analyzing and monitoring legal and regulatory changes.

Misinformation risk

As AI improves, it becomes a more effective tool for dispensing information. Unfortunately, some of that information is deliberate misinformation.

AI misinformation and manipulation

Intentional AI abuse can result in misinformation and the manipulation of public opinion. This includes deliberately generated false content -- text, images, video and other media -- as well as data poisoning, in which attackers corrupt the data sets an AI model learns from so that it produces false output. Such misinformation undermines the public's trust in governments, organizations and other public entities that use AI. These groups risk damaging their reputation and integrity, leading to negative economic consequences.

Mitigation includes fact-checking partnerships with other organizations, human oversight of information and development of trusted information sources. Ironically enough, organizations might also avoid this risk by using AI to detect false information patterns.
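
As a simple illustration of combining trusted information sources with automated checks, the following Python sketch flags any generated claim that lacks a citation from an approved domain so a human fact-checker can review it. The trusted-domain list and URLs are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organization treats as trusted.
TRUSTED_DOMAINS = {"example.gov", "example.edu"}

def needs_fact_check(claim: str, cited_urls: list[str]) -> bool:
    """Flag claims lacking any citation from a trusted domain."""
    for url in cited_urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS):
            return False  # at least one trusted citation; no escalation
    return True           # route to a human fact-checker

print(needs_fact_check("Q3 revenue grew 40%", ["https://blog.example.com/post"]))  # True
print(needs_fact_check("Q3 revenue grew 40%", ["https://stats.example.gov/q3"]))   # False
```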

Transparency risk

Businesses that rely heavily on AI must be transparent about AI decision-making and information-gathering, and they must address the legal and ethical issues surrounding AI-driven decisions.

AI decision-making and information-gathering

AI decision-making can create trust, reliability and consistency issues. It's not always clear how AI arrives at a particular decision or suggested course of action, which makes it difficult for human managers or administrators to apply changes they don't fully comprehend. Organizations using AI must account for the legal and ethical issues around these AI-driven decisions.

Similar hesitancy surrounds information-gathering activities. Few AI engines cite sources or provide a clear view of where data originates, making AI "facts" challenging to verify.

Make stakeholders aware of these concerns rather than camouflaging them. The later exposure of hidden flaws or reliability issues becomes a PR problem with serious consequences for an organization's reputation and market share.

Administration risk (IT)

System administration and AI integration carry risks, including security, data privacy and control, and technical mistakes. An overreliance on AI that leads to skill degradation exacerbates these risks.

No human in the loop

Companies that rely on AI over a skilled IT staff will not have capable employees available when a crisis exceeds AI's capabilities. While day-to-day operations might benefit from some AI integration, processes such as design, troubleshooting and incident response often require a more intuitive approach that only an experienced and well-trained IT staff provides.

Beyond company culture, individual IT professionals also cannot allow their skills to languish by letting AI manage all aspects of their jobs.

Technical mistakes

Unreviewed AI configuration management can lead to technical mistakes if companies remove human oversight from the loop. Potential issues can include automation errors, misallocated resources and incomplete reporting.

Avoiding excessive AI autonomy can reduce this risk. Ensure workflows include human review of AI processes. Maintain an experienced IT staff that can configure, manage and troubleshoot systems without AI involvement. Businesses can also provide upskilling and career growth opportunities to retain talent.
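
To make the human-review step concrete, here is a minimal Python sketch of an approval gate: an AI-proposed configuration change is displayed with its rationale and applied only after an explicit human decision. The ProposedChange fields and the example change are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProposedChange:
    """A configuration change suggested by an AI assistant (illustrative)."""
    target: str       # the system or resource being changed
    setting: str
    old_value: str
    new_value: str
    rationale: str    # the AI's stated reason, kept for the audit trail

def apply_change(change: ProposedChange) -> None:
    print(f"Applying {change.setting}={change.new_value} on {change.target}")

def review_gate(change: ProposedChange) -> None:
    """Require an explicit human decision before any AI change is applied."""
    print(f"AI proposes: {change.setting}: {change.old_value} -> {change.new_value}")
    print(f"Rationale: {change.rationale}")
    decision = input("Approve? [y/N] ").strip().lower()
    if decision == "y":
        apply_change(change)
    else:
        print(f"Rejected; {change.target} left unchanged and decision logged.")

review_gate(ProposedChange(
    target="web-frontend autoscaling group",
    setting="max_instances",
    old_value="20",
    new_value="4",
    rationale="Observed low weekend traffic",
))
```

The design point is that the AI can only propose; the default path (anything other than an explicit approval) leaves the system unchanged.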

Data management risk

Data security is paramount to any business's success. AI might supplement security configurations and automate various processes, but be aware of the potential pitfalls around data management.

Data security

Managing business data, such as customer records, proprietary information or user home directories, is critical to any sys admin's job. AI can supplement this role by classifying information based on content analysis before applying access controls to prevent data breaches, information exfiltration or other access violations. Classification mistakes put this data at risk, potentially exposing proprietary or private information to unauthorized users.

In addition, unauthorized access can lead to confidential information mixing into the AI's data sources. This is difficult for organizations to detect and correct.

Relying on automation and AI for data security is viable, but risk controls must include periodic auditing, human oversight and comprehensive penetration testing. These approaches help reduce security misconfigurations and misidentified information. As with other AI mitigations, the ultimate control resides with human decision-makers.
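
A periodic audit can be as simple as sampling AI-assigned sensitivity labels and comparing them against a human auditor's judgment, escalating any mismatch. In this minimal Python sketch, the file paths and labels are hypothetical, and human_review stands in for the auditor.

```python
import random

# Hypothetical sample: AI-assigned sensitivity labels for stored files.
ai_labels = {
    "/data/customers.csv": "confidential",
    "/data/blog-draft.md": "public",
    "/data/payroll.xlsx": "public",   # misclassified by the AI
}

def human_review(path: str) -> str:
    """Stand-in for a human auditor's judgment (assumed ground truth)."""
    return {"/data/payroll.xlsx": "confidential"}.get(path, ai_labels[path])

def audit(sample_size: int = 2) -> list[str]:
    """Spot-check a random sample of AI labels against human judgment."""
    mismatches = []
    for path in random.sample(list(ai_labels), k=sample_size):
        if human_review(path) != ai_labels[path]:
            mismatches.append(path)
    return mismatches

for path in audit(sample_size=3):
    print(f"ESCALATE: {path}: AI label '{ai_labels[path]}' overruled by auditor")
```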

Other risks to monitor

The list of potential AI risks is extensive and continues to change. Some risks that might be less common now could become greater issues in the future. When conducting an AI risk assessment, consider if any of the following areas are relevant to your organization:

  • AI weaponization: The intentional, malicious use of AI as a military, social or economic weapon designed to harm people.
  • Uncontrolled self-aware AI: Self-aware AI is a constant discussion point, with potential danger to people as the primary concern.
  • Environmental damage: AI-guided policies can result in environmental or climate damage, especially when processes do not account for environmental concerns.
  • Medical harm: AI-assisted medical and surgical procedures could cause harm if AI data sources and capabilities are not fully managed and understood.
