Exploding interest in GenAI makes AI governance a necessity

With more employees now using artificial intelligence tools to inform business decisions, guidelines that ensure the tools' safe and secure use are critical.

Enterprises need to heed a warning: Ignore AI governance at your own risk.

AI governance is essentially a set of policies and standards designed to mitigate the risks associated with using AI -- including generative AI -- to inform business decisions and automate processes previously carried out by human beings.

The reason it's now needed -- and can't be ignored -- is that enterprise interest in generative AI is exploding, which in turn has spurred more interest in traditional AI as well.

Historically, AI and machine learning models and applications were developed and used mostly by small data science teams and other experts within organizations, never leaving their narrow purview. The tools were used for forecasting, scenario planning and other types of predictive analytics, as well as to automate certain repetitive processes, also overseen by small groups of experts.

Now, however, sparked by OpenAI's November 2022 launch of ChatGPT, which represented a significant improvement in large language model capabilities, enterprises want to extend the use of AI tools to more employees to drive more rapid growth. LLMs such as ChatGPT and Google Gemini enable true natural language processing that was previously impossible.

When combined with an enterprise's data, true NLP lets any employee with an internet connection and the requisite clearance query and analyze data in ways that previously required expert knowledge, including coding skills and data literacy training. In addition, when applied to enterprise data, generative AI technology can be trained to relieve experts of repetitive tasks, including coding, documentation and even data pipeline development, thus making developers and data scientists more efficient.
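
To make that concrete, here's a minimal sketch of the pattern -- a stub standing in for the LLM, and a clearance check standing in for enterprise access controls. All the names and the tiny dataset are invented for illustration, not any vendor's API:

```python
import sqlite3

CLEARANCE = {"analyst_jo": {"sales"}}  # which users may query which datasets

def check_clearance(user: str, dataset: str) -> bool:
    """Governance gate: is this user cleared to query this dataset?"""
    return dataset in CLEARANCE.get(user, set())

def llm_to_sql(question: str) -> str:
    """Stand-in for an LLM call; a real system would prompt a model here."""
    if "revenue by region" in question.lower():
        return "SELECT region, SUM(amount) FROM sales GROUP BY region"
    raise ValueError("question not understood")

def answer(user: str, question: str) -> list:
    if not check_clearance(user, "sales"):  # the clearance check runs first
        raise PermissionError(f"{user} may not query sales data")
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("East", 120.0), ("West", 80.0), ("East", 40.0)])
    return conn.execute(llm_to_sql(question)).fetchall()

print(answer("analyst_jo", "What is revenue by region?"))
# e.g. [('East', 160.0), ('West', 80.0)]
```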

That combination of enabling more employees to make data-driven decisions and improving the efficiency of experts can result in significant growth.

If done properly.

If not, enterprises risk serious consequences, including decisions based on bad AI outputs, data leaks, legal noncompliance, customer dissatisfaction and lack of accountability, all of which could lead to financial losses.

Therefore, as interest in expanding the use of both generative AI and traditional AI transitions to more actual use of AI, and as more employees with less expertise get access to data and AI tools, enterprises need to ensure their AI tools are governed.

Some organizations have heeded the warning, according to Kevin Petrie, an analyst at BARC U.S.

"It is increasing," he said. "Security and governance are among the top concerns related to AI -- especially GenAI -- so the demand for AI governance continues to rise."

However, according to a survey Petrie conducted in 2023, only 25% of respondents said their organization has the proper AI governance controls to support AI and machine learning initiatives, while nearly half said their organization lacks the proper governance controls.

Diby Malakar, vice president of product management at data catalog specialist Alation, similarly said he has noticed a growing emphasis on AI governance.

Alation customers, like those of most data management and analytics vendors, have expressed interest in developing and deploying generative AI-driven tools. And as they build and implement conversational assistants, code translation tools and automated processes, they are concerned with ensuring proper use of those tools.

"In every customer call, they are saying they're doing more with GenAI, or at least thinking about it," Malakar said. "And one of the first few things they talk about is how to govern those assets -- assets as in AI models, feature stores and anything that could be used as input into the AI or the machine learning lifecycle."

Governance, however, is hard. Data governance has been a challenge for enterprises for years. Now, AI governance is taking its place alongside data governance as a requirement as well as a challenge.

[Graphic: the components of an AI governance framework]

Surging need

Data has long been a driver of business decisions.

For decades, however, data stewardship and analysis were the domain of small teams within organizations. Data was kept on premises, and even high-level executives had to request that IT personnel develop charts, graphs, reports, dashboards and other data assets before they could use them to inform decisions.

The process of requesting information, developing an asset to analyze it and reaching a decision was lengthy, taking a few days at minimum and -- depending on the number of requests and the size of the data team -- sometimes months. With data so tightly controlled, there was little need for data governance.

Then, a bit less than 20 years ago, self-service analytics began to emerge. Vendors such as Tableau and Qlik developed visualization-based platforms that enabled business users to view and analyze data on their own, with proper training.

With data no longer the sole domain of trained experts -- and with business users empowered to take action on their own -- organizations needed guidelines.

And with data in the hands of more people within an enterprise -- still only about a quarter of all employees, but more than before -- more oversight was needed. Otherwise, organizations risked noncompliance with government regulations and data breaches that could reveal sensitive information or cost a company its competitive advantage.

A similar circumstance is now taking place with AI -- albeit at a much faster rate, given all that has happened in less than two years -- that necessitates AI governance.

Just as data was once largely inaccessible, so was AI. And just as self-service analytics enabled more people within organizations to use data, necessitating data governance, generative AI is enabling more people within organizations to use AI, necessitating AI governance.

Donald Farmer, founder and principal of TreeHive Strategy, noted a similarity between the rising need for AI governance and events that necessitated data governance.

"That is a parallel," he said. "It's a reasonable one."

However, what is happening with AI is taking place much more quickly and on a much larger scale than what happened with self-service analytics, Farmer continued.

If properly governed, AI has the potential to completely alter how businesses operate. Farmer compared what AI can do for today's enterprises to what electricity did for businesses at the turn of the 20th century. At the time, widespread electrical use was dangerous. In response, organizations employed what were then known as CEOs -- chief electricity officers -- who oversaw the use of electricity and made sure safety was maintained.

"This is a very fundamental shift that we're just seeing the start of," Farmer said. "It's almost as fundamental as [electricity] -- everything you do is going to be affected by AI. The comparison with self-service analytics is accurate, but it's even more fundamental than that."

Alation's Malakar similarly noted parallels between self-service analytics and the surging interest in AI. Both are rooted in less technical employees wanting to use technology to help make decisions and take action.

"What we see is that the business analyst who doesn't know coding wants less and less reliance on IT," Malakar said. "They want to be empowered to make decisions that are data-related."

First, that was enabled to some degree by self-service analytics. Now, it can be enabled to a much larger degree by generative AI. Every enterprise has questions such as how to reduce expenses, predict churn or implement the most effective marketing campaign. AI can provide the answers.

And with generative AI, it can provide the answers to virtually any employee.

"They're all AI/ML questions that were not being asked to the same degree 10 years ago," Malakar said. "So now all these things like privacy, security, explainability [and] accountability become very important -- a lot more important than it was in the world of pure data governance."

Elements of AI governance

At its core, AI governance is a lot like -- and connected to -- data governance.

Data governance frameworks are documented sets of guidelines to ensure the proper use of data, including policies related to data privacy, quality and security. In addition, data governance includes access controls that limit who can do what with their organization's data.
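
In code, the access-control piece of such a framework can be as simple as a role-to-permission lookup. This is a minimal sketch with invented roles and users, not a reference to any particular governance product:

```python
# Map roles to the actions they may take on governed data.
ROLE_PERMISSIONS = {
    "data_steward": {"read", "write", "grant"},
    "analyst": {"read"},
}
USER_ROLES = {"maria": "data_steward", "sam": "analyst"}

def can(user: str, action: str) -> bool:
    """Return True if the user's role permits the action on governed data."""
    return action in ROLE_PERMISSIONS.get(USER_ROLES.get(user, ""), set())

assert can("maria", "write")
assert not can("sam", "write")  # analysts can read but not modify
```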

AI governance is linked to data governance and essentially builds on it, according to Petrie.

AI governance applies the same kinds of standards as data governance -- practices and policies designed to ensure the proper use of AI tools and the accuracy of AI models and applications. But without good data governance as a foundation, AI governance amounts to little.

Before AI models and applications can be effective and used to inform decisions and automate processes, they need to be trained using good, accurate data.

"[Data governance and AI governance] are inextricably linked and overlap quite a bit," Petrie said. "All the risks of AI have predecessors when it comes to data governance. You should view data governance as the essential foundation of AI governance."

Most enterprises do have data governance frameworks, he continued. But the same cannot be said for AI governance, as Petrie's 2023 survey demonstrated.

"That signals a real problem," he said.

The problem could be one that puts an organization at a competitive disadvantage -- that it isn't ready to develop and deploy AI models and applications and reap their benefits while competitors are. Potentially more damaging, however, is an enterprise that is developing and deploying AI tools but isn't properly managing how they're used. Rather than simply holding back growth, that could lead to negative consequences.

But AI governance is about more than just protection from potential problems. It's also about enabling confident use of AI tools, according to Farmer.

Good data governance frameworks strike a balance between limits on data use that protect the enterprise from problems and support for business users so they can work with data without fearing they'll unintentionally put their organization in a precarious position.

Good AI governance frameworks need to strike that same balance so that someone asking a question of an AI assistant isn't afraid that the response they get and subsequent action they take will have a negative effect. Instead, that user needs to feel empowered.

"People are beginning to come around to the idea that a well-governed system gives people more confidence in being able to apply it at scale," Farmer said. "Good governance isn't a restricting function. It should be an enabling function. If it's well governed, you give people freedom."

The specific elements of a good AI governance framework combine people who believe in the need for a system to manage the use of AI, processes that properly enforce that system and technological tools that assist in its execution, according to Petrie.

"AI governance is defining and enforcing policies, standards and rules to mitigate risks related to AI," he said. "To do that, you need people and process and technology."

The people aspect starts with executive support for an AI governance program led by someone such as a chief data officer or chief data and analytics officer. Also involved in developing and implementing the framework are those in supporting roles, such as data scientists, data architects and data engineers.

The process aspect is the AI governance framework itself -- the policies that address security, privacy, accuracy and accountability.

The technology is the infrastructure. Among other tools, it includes data and machine learning observability platforms that look at data quality and pipeline performance, catalogs that organize data and include governance capabilities, master data management capabilities to ensure consistency, and machine learning lifecycle management platforms to train and monitor models and applications.
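
As a rough illustration of the observability piece, a data-quality check amounts to automated rules over the data feeding a model. In this minimal sketch, the 1% null tolerance and the duplicate rule are invented policy choices:

```python
def quality_gate(rows: list[dict]) -> bool:
    """Fail the pipeline on excessive nulls or any duplicate IDs."""
    if not rows:
        return False
    null_rate = sum(1 for r in rows if None in r.values()) / len(rows)
    ids = [r["id"] for r in rows]
    return null_rate <= 0.01 and len(ids) == len(set(ids))

rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
print(quality_gate(rows))  # False: a 50% null rate breaches the 1% tolerance
```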

Together, the elements of AI governance should lead to confidence, according to Malakar. They should lead to conviction in the outputs of AI models and applications so that end users can confidently act. They should also lead to faith that the organization is protected from misuse.

"AI governance is about being able to use AI applications and foster an environment of trust and integrity in the use of those AI applications," Malakar said. "It's best practices and accountability. Not every company will be good at each one of [the aspects of AI governance], but if they at least keep those principles in mind, it will lead to better leverage of AI."

Benefits and consequences

Confidence is perhaps the most significant outcome of a good AI governance framework, according to Farmer.

When the data used to feed and train AI models can be trusted, so can the outputs. And when the outputs can be trusted, users can take the actions that lead to growth. Similarly, when the processes automated and overseen by AI tools can be trusted, data scientists, engineers and other experts can use the time freed up by being relieved of mundane tasks to take on new ones that likewise lead to growth.

"The benefit is confidence," Farmer said. "There's confidence to do more with it when you're well governed."

More tangibly, good AI governance leads to regulatory compliance and avoiding the financial and reputational harm that comes with regulatory violations, according to Petrie. Europe, in particular, has stringent regulations related to AI, and the U.S. is similarly expected to increase regulatory restrictions on exactly what AI developers and deployers can and cannot do.

Beyond regulatory compliance, good AI governance results in good customer relationships, Petrie continued. AI models and applications can provide enterprises with hyperpersonalized information about customers, efficiently enabling personalized shopping experiences and cross-selling opportunities that can increase profits.

"Those benefits are significant," Petrie said. "[But] if you're going to take something to customers -- GenAI in particular -- you better make sure you're getting it right, because you're playing with your revenue stream."

If enterprises get generative AI -- or traditional AI, for that matter -- wrong, meaning the governance framework controlling how AI models and applications are developed and deployed is poor, the consequences can be severe.

"All sorts of bad things can happen," Petrie said.

Some of them are the polar opposite of what can happen when an organization has a good AI governance framework. Instead of regulatory compliance, organizations can wind up with inquisitive regulators, and instead of strong customer relationships, they can wind up with poor ones.

But those are the end results.

What leads to inquisitive regulators and poor customer relationships in the first place includes poor accuracy, biased outputs and mishandling of intellectual property.

"If those risks are not properly controlled and mitigated, you can wind up with ... regulatory penalties or costs related to compliance, angry or alienated customers, and you can end up with operational processes that hit bottlenecks because the intended efficiency benefits of AI will not be delivered," Petrie said.

A lack of data security, explainability and accountability is another result of poor AI governance, according to Malakar.

Without the combination of good data governance and AI governance, there can be security breaches as well as improperly prepared data -- personally identifiable information that hasn't been anonymized, for example -- that seeps into models and gets exposed. In addition, without good governance, it can be difficult to explain and fix bad outputs in a timely fashion or know whom to hold accountable.
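
As a simplified sketch of that anonymization step -- real PII detection is far more thorough than the two invented patterns here -- redaction before data enters a training set might look like this:

```python
import re

# Illustrative patterns only: emails and U.S.-style phone numbers.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace detected PII with placeholder tokens before training."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

print(redact("Contact jo@example.com or 555-867-5309 for the report."))
# Contact [EMAIL] or [PHONE] for the report.
```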

"You don't want to build a model where it can't be trusted," Malakar said. "That's a risk to the entire culture of the company and can drive morale issues."

Ultimately, just as good AI governance engenders confidence, bad AI governance leads to a lack of confidence, according to Farmer.

If one competing company trusts its AI models and applications and another doesn't, the one that can act with confidence will reap the benefits, while the other will be stuck in place and miss out on the growth opportunities enabled by generative AI's significant potential.

"Given that the shift is so fundamental, not being well governed is really going to hold you back," Farmer said. "Governance is the difference between the ability to move swiftly and with confidence, and being held back and taking dangerous risks."

Eric Avidon is a senior news writer for TechTarget Editorial and a journalist with more than 25 years of experience. He covers analytics and data management.
