
8 generative AI challenges that businesses should consider

IT managers planning to adopt GenAI can expect to encounter some hurdles, including cost optimization, workforce effects, cybersecurity concerns and energy use.

The mainstreaming of generative AI offers here-and-now capabilities, the promise of future advancements -- and more than a few pitfalls.

This form of artificial intelligence technology hit its stride in 2022 with the release of OpenAI's ChatGPT and DALL-E 2. Those tools, and successors such as Anthropic's Claude, Google Gemini and Microsoft Copilot, make high-quality, AI-generated content an everyday reality. Each of those products is built on a language model, a type of machine learning model trained on massive amounts of data.

Users can now tap AI models for a range of content creation chores: text, image, video, audio and software code are all in the mix. But the potential benefits of the technology come at some cost. Business leaders should consider these eight generative AI (GenAI) challenges.

1. Controlling costs and obtaining ROI

Organizations rolling out GenAI initially pursued limited-scale, proof-of-concept experiments. The price tag wasn't the top concern in the early days of testing use cases. But cost became a consideration as IT managers started expanding GenAI pilots into more widely deployed production systems. The challenge was no longer managing early adoption but obtaining ROI.

Early on in the evolution of GenAI, industry executives considered training large language models (LLMs) a top expense item. GenAI models might contain billions or even trillions of parameters, making them a complex undertaking for a typical business. Most businesses, however, use models from the likes of Anthropic, Google, OpenAI or Microsoft rather than building their own. Even so, enterprises still pay for access to models through APIs and might incur the expense of customizing them.

But there are plenty of costs beyond the model itself. McKinsey & Co. research found GenAI models account for only about 15% of a typical project's cost. Other expenses include cloud migration, data preparation, change management and business process redesign. Even as they watch those costs, businesses are pursuing the revenue opportunities needed to generate ROI.
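Because API access is metered by tokens, per-request arithmetic scales quickly at production volumes. The sketch below shows the basic math; the per-million-token prices, request volumes and token counts are hypothetical placeholders, not any vendor's actual rates.

```python
# Rough sketch of estimating LLM API spend from token counts.
# All prices and volumes below are hypothetical, for illustration only.

def estimate_request_cost(input_tokens: int, output_tokens: int,
                          input_price_per_m: float = 3.00,
                          output_price_per_m: float = 15.00) -> float:
    """Return the estimated USD cost of one API call, given token counts
    and (hypothetical) prices per million tokens."""
    return (input_tokens / 1_000_000) * input_price_per_m + \
           (output_tokens / 1_000_000) * output_price_per_m

def estimate_monthly_cost(requests_per_day: int, avg_in: int, avg_out: int,
                          days: int = 30) -> float:
    """Scale the per-request estimate to a monthly figure."""
    return requests_per_day * days * estimate_request_cost(avg_in, avg_out)

# Example: 10,000 requests/day, averaging 1,500 input and 500 output tokens.
monthly = estimate_monthly_cost(10_000, 1_500, 500)
print(f"Estimated monthly model cost: ${monthly:,.2f}")
```

Even this toy model makes the McKinsey point concrete: the token bill is one line item, while data preparation, integration and process redesign sit outside it entirely.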

2. Reshaping the workforce

Generative AI is restructuring how work gets done in many fields, which raises job-loss concerns. The tech sector layoffs prevalent in recent years have been attributed to a number of factors, including rising interest rates in 2022 and 2023 and over-hiring during the COVID-19 pandemic. The role of GenAI adoption in the cutbacks is somewhat ambiguous, but, at the very least, it seems to have affected job listings.

Industry observers, while not discounting the potential for layoffs, believe the main influence of GenAI will be in changing employees' jobs rather than eliminating them. Arun Chandrasekaran, vice president, analyst and part of the AI strategy team at Gartner, said he believes GenAI is replacing some of the tasks workers perform. But the number of tasks the technology automates will vary by position, he added, while posing these questions: "How do we retrain people who have a high degree of what they do automated by AI? What skills do they need to acquire? What roles do they need to gravitate toward?"

Pablo Alejo, founder of business and technology consultancy Copilot, said GenAI is creating new jobs. "I don't believe it is a situation where it's doom and gloom," he said. "I believe, if anything, it reveals opportunities for something different to happen."

That's proven to be the case for Alejo, a former Accenture and West Monroe executive who launched his own consultancy in 2024. He uses GenAI to write applications and conduct customer analysis for his clients.

[Graphic: generative AI's key business challenges across the enterprise]

3. Dealing with security and data privacy concerns

AI models lower the cost of content creation. That helps businesses but also helps threat actors who can more easily modify existing content to create deepfakes. Digitally altered media can closely mimic the original and be hyperpersonalized. "This includes everything from voice and video impersonation to fake art, as well as targeted attacks," Chandrasekaran said.

Data loss is another security concern. Employees can avail themselves of various GenAI tools and inadvertently release sensitive company information through their prompts. This newer vulnerability is elevating the importance of established cybersecurity tools, such as data loss prevention (DLP), said Bill Bragg, CIO at predictive AI and GenAI platform provider SymphonyAI. "Data loss prevention," he noted, "is an old technology, but it's super important."
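The prompt-leakage risk Bragg describes can be illustrated with a minimal scrubbing filter that redacts obviously sensitive patterns before a prompt leaves the company network. The patterns below are simplified examples; commercial DLP products use far richer detection than a few regular expressions.

```python
import re

# Illustrative prompt-scrubbing filter. Patterns are simplified examples,
# not a complete or production-grade DLP rule set.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(scrub_prompt("Email jane.doe@example.com the key sk-abcdef1234567890XYZ"))
```

In practice, a filter like this would sit in a gateway or proxy between employees and external GenAI tools, so redaction happens before any third-party API sees the text.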

Other effects of GenAI on cybersecurity include more sophisticated phishing attacks. Threat actors use GenAI tools to create convincing messages to lure unsuspecting users. They also can employ the technology to write malicious code, analyze software to plot attacks and hone their hacking skills.

4. Keeping tabs on GenAI hallucinations and algorithmic bias

GenAI systems can produce incorrect or misleading information when responding to user inquiries. Tools are said to hallucinate when they invent "facts," exposing companies to legal and business risks. In general, hallucinations damage the credibility of AI. "Distrust of AI is a challenge that AI application vendors are facing today," Chandrasekaran said.

AI providers and users also battle algorithmic bias, another source of legal risk. When trained on faulty, incomplete or unrepresentative data sets, GenAI models will produce systematically biased results. Unchecked, AI bias spreads through the systems and influences decision-makers who rely on the results, potentially leading to discrimination.

"These models are massively pre-trained on internet data," Chandrasekaran said. "They reflect the biases of the internet."
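One common way to check for the kind of bias described above is to compare a model's positive-outcome rate across demographic groups, a metric known as demographic parity. The sketch below uses fabricated data purely for illustration.

```python
# Toy demographic-parity check: compare a model's positive-outcome rate
# across groups. The predictions and group labels are fabricated examples.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Return {group: share of positive predictions} from parallel lists."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates = positive_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")
```

A large gap does not prove discrimination by itself, but it is the kind of signal that should trigger a closer look at the training data and the model's downstream use.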

5. Providing coordination and oversight

Newer technologies often compel organizations to launch centers of excellence (CoEs) to focus on effective adoption, and that has been the case with GenAI. CoEs and similar groups such as communities of practice can provide a good way to scale AI across an organization, Chandrasekaran said. But they can also hinder deployment.

"We have seen many large enterprises go down the path," he explained. "Some have succeeded, and others have not. Often, they don't succeed because these communities of practice or centers of excellence start thinking of themselves as 'my way or no way.'"

Groups responsible for coordinating GenAI can become insensitive to the needs of creators, builders and other stakeholders within an enterprise. Chandrasekaran said he advises organizations to act as a forest ranger rather than a guard -- that is, give GenAI adopters a map, suggest which trails to take and offer support if they encounter problems. He described the forest ranger approach as: "You are doing something new. How can I help you in a meaningful way?"

But the guard, or gatekeeper, approach can encourage shadow AI, which operates outside of corporate guardrails. Alejo also cited the tendency of some CoEs to become overly restrictive and distrustful of employees. An enterprise's core values, he added, will foster the proper use of GenAI. "An organization that cares about privacy," Alejo said, "will create AI that cares about privacy."

6. Tackling legacy systems and technical debt

Incorporating GenAI into older technology environments could raise additional issues for businesses. Legacy integration could prove especially vexing in the case of aging client-server and mainframe systems. But even the current crop of enterprise applications presents challenges, and businesses struggle with data quality and classification. "There's no simple way to integrate data and applications into an AI workflow," Chandrasekaran said. "They have technical debt in the data estate."


Ironically, GenAI is quickly joining legacy systems and problematic data in the technical debt category. "As generative AI's adoption continues to scale, companies need to actively manage their technical debt to prevent it from ballooning," according to an Accenture report published in October 2024. The IT consultancy surveyed 1,500 companies, and 41% of respondents cited AI as the top contributor to their technical debt.

The GenAI expansion, Bragg said, poses a hard question for technology managers: "Can you innovate and eliminate debt at the same time?"

7. Managing energy demand

GenAI workloads contribute to increasing electricity usage in data centers, making the availability and cost of energy an enterprise IT issue.

Training LLMs and prompting them to respond to user queries are compute-intensive activities that require ample amounts of power. In addition, heat-generating AI data centers need sophisticated cooling systems that consume even more energy. To address AI-driven demand, Fortune 500 companies will shift $500 million of their energy Opex to microgrids through 2027, according to market researcher Gartner. Microgrids provide independent energy systems to meet the needs of a company or group of companies.

AWS, Google and Microsoft, meanwhile, are turning to nuclear power to satisfy their energy requirements. In addition, the Stargate Project, which plans to invest $500 billion to build AI data centers, is also exploring energy sources. President Donald Trump unveiled the initiative in January 2025, with the financial backing of investment firm MGX, OpenAI, Oracle and SoftBank. At the unveiling, Trump said his administration would facilitate the Stargate data centers' electricity production -- potentially "at their own plants," according to published reports. OpenAI, meanwhile, seeks to partner with "renewable energy and sustainability firms," according to the GenAI company's website.

Amid the varied power strategies, some CIOs plan to evaluate an AI provider's energy-efficiency approaches as a vendor-selection criterion. Indeed, energy demands rank among their top challenges for 2025.

8. Facing uncertainty: How intelligent is AI going to get?

Businesses planning GenAI rollouts must be ready to deal with uncertainty, given the still-evolving nature of the technology. Will GenAI continue to make rapid progress, or will it suddenly plateau in an AI winter?

The challenge for adopters is not knowing precisely where they stand on the GenAI roadmap, Alejo said. "Are we still at the beginning or at a ceiling?" he asked. Alejo believes GenAI is still at the beginning, but he said businesses must consider the possibility of hitting a barrier and what that means for their ability to innovate.

Chandrasekaran also pointed to AI uncertainty as a challenge, noting the ongoing debate on whether artificial general intelligence (AGI) is happening soon or still years away. He alluded to the philosophical debate about whether AGI is even the right goal to pursue. Might it be better to evolve AI in areas in which it already excels?

Technology adopters will eventually learn whether AI systems develop more intelligence and reasoning capability going forward. In the meantime, it's anybody's guess.

Editor's note: This article was updated in 2025 to reflect the latest GenAI challenges confronting businesses today.

John Moore is a writer for Informa TechTarget covering the CIO role, economic trends and the IT services industry.
