AI for ESG: Benefits, challenges and the CIO's role

The hype surrounding artificial intelligence has extended to its use in ESG. But, as in other areas, AI is no panacea. Learn how the tech can both help -- and hurt -- sustainability.

More companies are turning to artificial intelligence in their quest for sustainability improvements without understanding the dual nature of the technology.

AI tools do hold promise for environmental, social and governance (ESG) efforts, but there are negative impacts to consider. Business, technology and sustainability leaders must understand AI's benefits and downsides to properly use the technology for a net-positive effect. In particular, the CIO has an important role in understanding the nature of using AI for ESG.

Risks of using AI for ESG

The risks surrounding AI use are plentiful, especially as they relate to ESG. So, before examining how AI might help support ESG initiatives, it's important to first examine some of the potential negative impacts.

Environmental risks

Artificial intelligence tools can be resource-intensive and have an outsized negative environmental impact. The compute power required to train and use many forms of AI systems -- especially generative AI (GenAI) -- is massive.

Training a single AI model can emit nearly five times the lifetime emissions of the average American car, according to "Energy and Policy Considerations for Deep Learning in NLP," a 2019 study conducted by researchers at the University of Massachusetts Amherst.
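For context, such estimates typically come from a simple energy-to-emissions calculation. The sketch below uses illustrative, assumed figures for GPU count, training time, power draw, data center overhead and grid carbon intensity -- not the study's actual inputs:

```python
# Rough estimate of CO2 emissions from training a large AI model.
# All figures below are illustrative assumptions, not measured values.

def training_emissions_kg(gpu_count: int,
                          hours: float,
                          avg_gpu_power_kw: float,
                          pue: float,
                          grid_kg_co2_per_kwh: float) -> float:
    """Energy (kWh) scaled by data center overhead (PUE) and grid carbon intensity."""
    energy_kwh = gpu_count * hours * avg_gpu_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh

if __name__ == "__main__":
    # Hypothetical training run: 512 GPUs for 30 days at 0.3 kW each,
    # a PUE of 1.4 and a grid emitting 0.4 kg CO2 per kWh.
    kg = training_emissions_kg(512, 30 * 24, 0.3, 1.4, 0.4)
    print(f"Estimated training emissions: {kg / 1000:.1f} metric tons CO2")
```

The point of such a sketch is less the exact number than what drives it: GPU-hours, data center efficiency and the carbon intensity of the local grid.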

More AI use translates into more data center capacity, which, in turn, requires new construction, as well as more electricity and more water to dissipate the heat those data centers generate.

"Across the board, AI use is really going to increase the computing we do today, and that will have significant environmental impact," said Ed Anderson, distinguished VP analyst at Gartner.

The increase in data storage and transfer associated with AI is one important source of that growing negative environmental impact.

"Data proliferation exponentially increases storage and network requirements, impacting energy and emissions," according to "The Promise and Challenge of AI in ESG Sustainability," a January 2024 report published by SustainableIT.org, a nonprofit advancing sustainability through technology leadership, located in Redwood City, Calif.

Social risks

The use of artificial intelligence poses risks related to employees, communities and other key stakeholders.

For example, not all countries or populations have the resources, staff or infrastructure to support AI development or use.

The tech is neither equally accessible nor equitable, as some populations are shut out of its use and any potential benefits, according to the SustainableIT.org report.

AI can also amplify human bias.

People can -- and do -- introduce biases into AI systems through the algorithms they write, the data they use or a combination of the two, according to the SustainableIT.org report.

Machine learning-enabled systems can reflect a variety of biases, from confirmation bias that reinforces preexisting data trends to sampling bias that captures only selected members of a population.

For example, AI-based healthcare systems can make biased decisions against certain races and genders. AI-based hiring tools can exhibit the same types of bias, screening out women, minorities and people with disabilities.

As documented real-world examples gain attention, more lawmakers are pushing for laws that reduce the likelihood of AI bias.

Governance risks

Artificial intelligence tools -- especially the large language models (LLMs) that underpin GenAI -- also pose governance and compliance risks. These include accidental breaches of copyright law, data privacy concerns and hallucinations.

For example, AI hallucinations, which are falsehoods LLMs present as truth and which users might act on, can lead to operational risks, safety risks and an array of bad decisions. Mitigating those risks and ensuring responsible AI require human oversight, value alignment and the other essentials of a responsible AI framework.

AI use also poses privacy and compliance risks. Many companies use publicly available resources to quickly train LLMs, even when they don't have permission from the copyright holders -- a practice that risks violating national and international copyright laws.

As to data privacy, individuals training an LLM might inadvertently include private user or company data. Knowingly or unknowingly mishandling that data might expose personally identifiable information or trade secrets to other users of the same tools.

Due to all these risks, organizations that use AI for ESG need to closely monitor their usage to comply with laws.

Ways AI can support ESG strategy

While AI poses many challenges, its capabilities have some potential to support a company's sustainability and ESG initiatives.

Potential environmental uses

AI has a potential role in monitoring complex business operations for environmental issues and suggesting solutions.

Organizations can use AI to analyze numerous areas -- energy use, supply chains, waste streams and other operational areas -- to identify opportunities for improvement, said Rick Pastore, research principal at SustainableIT.org. They can also use AI to become more efficient, thereby cutting back on the natural resources they use, and to uncover ways to manage or restore the environment, such as designing the optimal landscape for a region.
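As a simplified illustration of the kind of analysis Pastore describes, the sketch below uses a basic statistical screen -- standing in for a full machine learning model -- and made-up meter readings to flag facilities whose energy use drifts well above their historical baseline:

```python
# Flag facilities whose latest energy use runs well above their historical baseline.
# Hypothetical data; a simple z-score stands in for a more sophisticated AI model.
from statistics import mean, stdev

meter_readings_kwh = {
    # facility: last 12 monthly readings (illustrative numbers)
    "plant_a": [410, 405, 398, 420, 415, 409, 412, 418, 430, 455, 480, 510],
    "plant_b": [220, 218, 225, 221, 219, 223, 220, 224, 222, 221, 225, 223],
}

def flag_outliers(readings, threshold=2.0):
    """Return facilities whose latest reading exceeds the baseline mean by `threshold` standard deviations."""
    flagged = []
    for site, series in readings.items():
        baseline, latest = series[:-1], series[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (latest - mu) / sigma > threshold:
            flagged.append(site)
    return flagged

print(flag_outliers(meter_readings_kwh))  # ['plant_a']
```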

These kinds of AI uses could help companies lower their environmental footprints, he said. Furthermore, the technology could help accelerate efforts to meet the key objectives of the Paris Agreement, such as reaching net-zero goals or limiting temperature increases.

In all cases, leaders need to examine whether the resources required to use AI create a net-positive effect.


Potential social uses

Ethical AI has some potential to help address various organizational social challenges.

By spotting concerning patterns, such as a lack of equitable access to opportunities and workplace health and safety issues, AI tools can support efforts to overcome inequalities. However, since the use of AI has historically reinforced bias, leaders need to examine and correct current practices. Using AI for areas such as reversing discriminatory lending or hiring practices or supporting health equity requires addressing current prejudices and defining what constitutes fairness.

The concept of ethics, fairness and responsibility as it relates to AI is just beginning to get wider attention and rests on complex conversations that have no easy answers.

Potential governance uses

The "G" in ESG covers a wide swath of issues under the governance umbrella -- everything from board composition to reliable and transparent reporting systems. AI's ability to rapidly audit problems is tailor-made to help support certain areas of governance.

"AI has great potential to improve overall governance at all levels [within an organization]," Anderson said.

For example, artificial intelligence tools can help organizations adhere to their own rules: Models can be designed to operate within defined parameters and to recognize and send alerts when activities run outside those parameters, he said. In fact, AI might be able to take corrective action when activities cross those boundaries, thereby strengthening an organization's compliance with governance standards.
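A minimal sketch of that pattern, using plain rule checks and hypothetical policy limits in place of a trained model:

```python
# Check activities against defined governance parameters and raise alerts.
# The policy limits and activity records below are hypothetical.

POLICY_LIMITS = {
    "max_single_payment_usd": 50_000,
    "allowed_data_regions": {"eu-west", "us-east"},
}

def check_activity(activity: dict) -> list[str]:
    """Return alert messages for any parameters the activity violates."""
    alerts = []
    if activity.get("payment_usd", 0) > POLICY_LIMITS["max_single_payment_usd"]:
        alerts.append(f"Payment {activity['payment_usd']} exceeds approval limit")
    if activity.get("data_region") not in POLICY_LIMITS["allowed_data_regions"]:
        alerts.append(f"Data stored in unapproved region {activity.get('data_region')}")
    return alerts

print(check_activity({"payment_usd": 72_000, "data_region": "ap-south"}))
```

An AI-based version of this idea would learn the policies and flag conflicts among them rather than relying on hand-written thresholds, but the alert-and-correct loop is the same.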

Furthermore, companies struggling to align tricky governance guidelines could potentially benefit from using AI.

Organizations could use artificial intelligence to understand and align the numerous, complex and sometimes contradictory rules and regulations that make up governance, Anderson said.

"In some organizations, governance is a vast discipline, with lots and lots of different policies, so it's hard to get their hands around all that," he said. "But AI can learn all that and not only recognize where particular behaviors or data are outside of the governance policies, but [the tech] could also identify where there are conflicting policies."

Potential uses across ESG

Efforts around ESG aren't typically about examining discrete areas in isolation, but rather seeing across ESG efforts to form a holistic picture. And that's a complex data issue.

AI technology has potential to help in that effort, collecting and analyzing data from an organization's ESG activities, including how the company uses artificial intelligence for ESG reporting, said Jody Elliott, National Grid's head of IT risk and sustainability. That greater visibility can help stakeholders get a more accurate picture of the organization's efforts and progress across the three areas of ESG.

The focus on transparent data reporting will continue to increase in response to growing regulatory requirements.

"As regulations become more common, there's going to be more emphasis on the transparency of the data being reported," Elliott said. "AI has an opportunity here to gather and interpret data and provide insights into that data in a more efficient way."

Early days of AI's ESG impacts

London-based multinational utility company National Grid is among the organizations exploring AI's use in ESG efforts.

The company is using AI to analyze weather, growth patterns and other data to determine the most environmentally friendly approach to controlling the trees, shrubs and other plants that live around its transmission lines and equipment, Elliott said.

Vegetation management is just one area where the use of AI can help strengthen National Grid's ESG activities, Elliott said. But AI could also easily negatively affect the organization's ESG efforts if company leadership isn't thoughtful and strategic in these efforts.

"Executives must make conscious decisions on where they deploy AI," Elliot said.

For example, the use of AI could increase the company's energy consumption and thereby drive up its greenhouse gas emissions, he said. Furthermore, AI could produce biased outputs, or it could expose the company's sensitive data.

Concern about AI's potential negative impacts does exist.

For example, 65% of surveyed CEOs agreed that AI's social, ethical and criminal risks require attention, according to the July 2023 report, "Artificial intelligence ESG stakes," published by EY.

But whether that caution slows many business leaders' urge to move quickly to gain a competitive edge -- or to avoid being left behind -- is another matter.

"Some corporate leaders have a desire to balance the good and the bad -- in other words, make sure that AI tools scale responsibly without having negative impacts on ESG and sustainability," Pastore said.

But many leaders are concerned that AI tools -- particularly generative AI -- are being used in a variety of ways without effective governance, Pastore said.

What CIOs must understand about AI and ESG

CIOs have an important role to play in guiding sustainability efforts. That includes a company's use of AI to support those efforts.

Many organizations aren't yet focused on using AI to boost their ESG postures, Anderson said. Instead, they're implementing AI to boost productivity, capture market share, improve customer service, create differentiating products and pursue other revenue-related activities.

Using AI without attention to sustainability consequences is a recipe for disaster.


The organizational focus on using AI strictly for productivity will hurt ESG efforts in the short term, Anderson said. However, AI should help ESG activities in the long term as organizations mature in their use of the technology.

The issue of bias is one example.

AI learns from people, and societies carry a myriad of biases and prejudices that the technology absorbs, Anderson said.

"But, over time, we will be able to tune AI agents to behave how we want them to behave rather than as a mirror of what society is doing," he said.

But that will not happen overnight.

"It will just take some time to refine those models," he said.

CIOs and IT leaders can play a crucial role in driving the responsible and effective use of AI, especially when managing ESG data.

CIOs, as tech leaders, are well positioned to best understand the potential upsides and downsides of artificial intelligence in the ESG arena, Pastore said. They should take leading roles in devising how to use AI to bring improvements to ESG efforts, as well as in crafting guardrails and policies to minimize or even eliminate the downsides.

For companies without guardrails in place, resources are emerging to help guide AI implementations.

SustainableIT.org and other entities have published AI governance recommendations and responsible AI frameworks to help guide CIOs and their organizations, Pastore said.

Finding ways to minimize AI's environmental impact to create a net positive is another key element.

As just one example, CIOs can choose data centers in naturally cool locations to minimize cooling needs and data centers powered by renewable energy, Elliott said.
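As a simple illustration of that kind of choice, the sketch below compares estimated emissions for running the same workload in two regions; the PUE and grid carbon intensity figures are illustrative, not real data:

```python
# Compare estimated emissions of running the same workload in different data center regions.
# Regional PUE and grid carbon intensity figures are illustrative, not real data.

REGIONS = {
    "nordic":  {"pue": 1.1, "kg_co2_per_kwh": 0.05},   # cool climate, mostly low-carbon grid
    "central": {"pue": 1.5, "kg_co2_per_kwh": 0.45},   # warmer climate, mixed grid
}

def workload_emissions_kg(it_energy_kwh: float, region: dict) -> float:
    """Total facility energy (IT load x PUE) times grid carbon intensity."""
    return it_energy_kwh * region["pue"] * region["kg_co2_per_kwh"]

it_load_kwh = 10_000  # hypothetical monthly IT energy for the workload
for name, region in REGIONS.items():
    print(name, round(workload_emissions_kg(it_load_kwh, region)), "kg CO2")

best = min(REGIONS, key=lambda r: workload_emissions_kg(it_load_kwh, REGIONS[r]))
print("Lowest-emission region:", best)
```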

That practice is just one of the principles of green computing and an illustration that, as companies rely increasingly on AI and other technology, taking a more holistic view of tech's positives and negatives will be critical.

Mary K. Pratt is an award-winning freelance journalist with a focus on covering enterprise IT and cybersecurity management.

Next Steps

ESG metrics: Tips and examples for measuring ESG performance

Key ESG and sustainability trends, ideas for companies

Ways organizations can address ESG's social factors
