
How ChatGPT and generative AI will affect IT operations

As generative AI programs improve, they raise questions for many engineering disciplines about the future of work -- and IT operations is no exception.

Generative AI has dominated news cycles over the last six months, as technical advances in the field seem poised to upend aspects of day-to-day work, including that of IT professionals.

Discussions about the implications of generative AI for technology and broader society first reached a fever pitch in November 2022, when OpenAI released ChatGPT. This month, OpenAI reignited the debate with an API that could alter the nature of corporate and consumer applications, and the release of GPT-4, an updated large language model (LLM) capable of passing standardized tests such as the SAT and the bar exam.

Generative AI can be used to produce original content in response to queries. But it can also be used to code or operate as a virtual assistant. Other generative AI examples include GitHub's Copilot, based on OpenAI's Codex, which can create software application code based on natural language prompts; Salesforce's CodeT5; and Tabnine's code completion tool. This week generative AI also surfaced in new AI assistants for Microsoft Azure and Office 365 as well as updates to Google Cloud and Google Workspace.

Even before ChatGPT burst onto the scene, generative AI had made its way into tools familiar to IT ops pros, such as Red Hat's Ansible infrastructure-as-code software. IBM and Red Hat launched Project Wisdom in October with the goal of training a generative AI model to create Ansible playbooks.

"By simply typing in a sentence, we intend to make it easier to create automation content, find automation content, improve automation content and, perhaps most importantly, explain what a playbook does without running it," wrote Ashesh Badani, senior vice president and head of products at Red Hat, in an October blog post.
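Red Hat hasn't published Project Wisdom's output here, but a hand-written example gives a sense of the target: the kind of playbook a one-sentence prompt such as "install and start nginx on my web servers" might map to. The host group and task names below are illustrative assumptions, not actual Project Wisdom output.

```yaml
# Hypothetical playbook for the prompt:
# "install and start nginx on my web servers"
- name: Install and start nginx
  hosts: webservers
  become: true
  tasks:
    - name: Install the nginx package
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Explaining what such a playbook does without running it -- the capability Badani highlights -- amounts to generating the inverse mapping, from YAML back to a plain-language sentence.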


Already, generative AI's ability to take on coding tasks -- once the sole province of human developers -- has prompted anxiety among software engineers about whether such programs will eventually replace them. While complete replacement is unlikely, generative AI could change the nature of work for programmers significantly, shifting their expertise from directly instructing machines via coding languages to what's been dubbed prompt engineering.

It's also clear that some software engineering tasks, such as test generation, will soon be taken over by AI, according to one analyst.

"[Generative AI] can generate not just application code but also process automation, where it also generates functional tests," said Diego Lo Giudice, an analyst at Forrester Research. "Not just unit tests -- functional tests."
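To make Lo Giudice's distinction concrete, here is a hand-written Python toy showing the shape such generated tests take -- unit tests covering single behaviors and a functional test covering the whole contract. The `slugify()` function and its tests are illustrative assumptions, not actual model output.

```python
import re

def slugify(title: str) -> str:
    """Turn a title like 'Hello, World!' into a URL slug like 'hello-world'."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Unit tests: one isolated behavior each.
def test_lowercases():
    assert slugify("ABC") == "abc"

def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# Functional test: the end-to-end contract for a realistic input.
def test_full_title():
    assert slugify("  GPT-4: What's New?  ") == "gpt-4-what-s-new"
```

A code-generation model prompted with only the `slugify()` implementation can emit tests of exactly this shape, which is why analysts see test generation as an early candidate for automation.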

Meanwhile, modern infrastructure managed by IT ops pros in roles such as site reliability engineer (SRE) is largely code driven. In the rapidly growing field of platform engineering, IT pros act as a conduit between application developers and complex back-end infrastructure, often creating infrastructure-as-code templates to ensure applications are deployed smoothly and according to enterprise policies in test environments and production.

Generative AI's history stretches from the "mechanical brain" of 1932 to the LLMs of 2023, but major breakthroughs in the last year have prompted debate about its role in the future of white-collar work.

Observability, chaos engineering ripe areas for generative AI

Some specific IT ops skills and workflows could become the domain of generative AI as it improves, industry observers said. In addition to generative AI for infrastructure-as-code like Project Wisdom, observability could see LLMs play an increased role in the future.

"If it's got the ability to access your data and it understands when you're saying -- 'Connect this data to this data, to this data, and then show me the result' -- then you have a conversational interface for getting reports on business metrics, server performance metrics, whatever," said Rob Zazueta, a freelance technical consultant in Concord, Calif.
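A minimal Python sketch of the conversational interface Zazueta describes, assuming a hypothetical `call_llm()` wrapper around whichever completion API is in use. Here the wrapper returns a canned PromQL query so the sketch runs standalone; in practice it would call a real model.

```python
# Sketch: translate a plain-English question about metrics into a
# query an observability backend could execute. call_llm() is a
# stand-in for a real LLM call.

PROMPT_TEMPLATE = """You are an observability assistant.
Available metrics: {metrics}
Translate this question into a single PromQL query and return
only the query: {question}"""

def build_prompt(question: str, metrics: list[str]) -> str:
    """Assemble the prompt sent to the model."""
    return PROMPT_TEMPLATE.format(metrics=", ".join(metrics), question=question)

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM completion call. Returns a canned
    query so the sketch is runnable without any API access."""
    return "avg(rate(http_requests_total[5m]))"

def answer(question: str, metrics: list[str]) -> str:
    """Conversational entry point: question in, executable query out."""
    return call_llm(build_prompt(question, metrics))
```

The same pattern -- natural language in, a structured query out, results summarized back in natural language -- applies equally to business metrics and server performance data.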

Test generation and automation for resilience workflows, such as chaos engineering and security penetration testing, could also be suited for generative AI, according to another IT expert.

"When we think about resilience, you're looking at what could potentially go wrong," said Chris Riley, senior manager of developer relations at marketing tech firm HubSpot. "Wouldn't it be cool if there was a way to truly test everything that could go wrong? What are the test cases that we could never even imagine?"

An AI bot could perform repetitive testing work that humans often don't have time for, Riley said.

"If you had a virtual penetration analyst or a virtual bug bounty bot, it could just continually be poking around at things, seeing what works, what doesn't work, and maybe even testing documentation with real world scenarios," Riley said. "There are a lot of interesting use cases around identifying gaps … versus waiting to hear about them from somebody."
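At its simplest, the "virtual bug bounty bot" Riley imagines reduces to a fuzzing loop: generate inputs no human would bother to type and record which ones break things. The target function and its hidden bug below are illustrative assumptions.

```python
import random
import string

def fuzz(target, rounds=1000, seed=0):
    """Repeatedly poke a target function with random strings and
    collect the inputs that make it raise -- the tireless, repetitive
    probing a bot could run continuously."""
    rng = random.Random(seed)
    failures = []
    for _ in range(rounds):
        payload = "".join(rng.choices(string.printable, k=rng.randint(0, 20)))
        try:
            target(payload)
        except Exception as exc:
            failures.append((payload, type(exc).__name__))
    return failures

# Toy target with a hidden bug: it chokes on inputs containing '%'.
def parse_query(s: str) -> str:
    if "%" in s:
        raise ValueError("unescaped percent")
    return s.lower()
```

Running `fuzz(parse_query)` surfaces the `%` bug without anyone imagining that test case in advance; a generative model adds value on top of this by proposing realistic, structured payloads rather than random noise.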

AI and machine learning are already used by AIOps products for IT incident response. But generative AI could improve post-mortem analyses on unstructured data such as chat files, audio calls and other natural language communication, said Robert Nishihara, co-founder and CEO at Anyscale, a service provider that hosts infrastructure used to train large AI models, including GPT-4.

"With audio data [such as] recorded sales calls … [we are] able to ask questions like, 'Why did they choose [us]?' or -- if we lost the deal -- 'Why did they go with a competitor?' instead of having to rewatch the call and figure that out," he said. "The same kind of principle applies to things like IT incidents or [software] bugs that are being filed and issues that are being tracked."
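As a crude, deterministic stand-in for what an LLM could do with such unstructured data, a post-mortem helper might first pull likely key events out of an incident chat log; a generative model would handle the nuance this keyword filter misses. The keywords and log format below are assumptions for illustration.

```python
# Draft an incident timeline from raw chat lines of the form
# "<timestamp> <message>". A keyword filter is a deliberately crude
# proxy for LLM-driven post-mortem analysis.

EVENT_KEYWORDS = ("deploy", "alert", "error", "rollback", "resolved")

def draft_timeline(chat_lines):
    """Return (timestamp, message) pairs that mention likely key events."""
    events = []
    for line in chat_lines:
        ts, _, msg = line.partition(" ")
        if any(k in msg.lower() for k in EVENT_KEYWORDS):
            events.append((ts, msg))
    return events
```

Fed a day's worth of incident chat, this yields a skeleton timeline; the generative step Nishihara describes is answering follow-up questions like "what triggered the alert?" against that same raw text.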

There may be cases where generative AI can make complex technologies accessible to people without deep expertise and alleviate tech skills gaps, Nishihara said.

"As the tooling and AI improves, I think it will make it possible for generalists to do some [previously specialized] roles," he said. "It'll put more emphasis on being able to ask the right questions and less weight on knowing the technical details of how to translate those questions into the specific tool that you're using."

Generative AI can take on tasks but not replace humans

That type of skill -- asking the right questions and specifying the series of steps needed to solve a complex problem -- is an area where generative AI isn't close to replacing humans, Nishihara said.

"On a five-year timeframe, I certainly don't see software engineers getting fully replaced," he said. "I think it will just make a lot of things easier … [and] enable a lot more people to build software applications faster."

Given it's still early in the hype cycle for generative AI, some of the challenges with training and maintaining LLMs are being glossed over, said Andy Thurai, an analyst at Constellation Research.

For example, the data sets and infrastructure resources required to train an LLM are enormous, Thurai said.

"Individual enterprises may not have enough data to train an LLM," he said. "That's only potentially possible with large hyperscalers -- Google, Azure or AWS might have enough data to do that."

Services such as the ChatGPT API make it easier to work with existing LLMs. But training a new model to target IT applications would require a separate, costly effort, he said. Ultimately, the benefits of training an LLM to generate results such as infrastructure-as-code templates might not justify that level of investment.
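Using an existing LLM through the ChatGPT API, rather than training one, might look like the following Python sketch. It uses the openai library's chat completion interface as released alongside the ChatGPT API; the system prompt, Terraform task and model choice are illustrative assumptions, and the network call is isolated in `generate_template()` so the payload builder can be exercised without credentials.

```python
# Sketch: reuse a hosted LLM for an IT ops task (generating an
# infrastructure-as-code template) instead of training a model.

def build_messages(resource_description: str) -> list:
    """Frame the IaC task as a chat: a system role sets the job,
    a user message carries the plain-language description."""
    return [
        {"role": "system",
         "content": "You generate Terraform templates from descriptions."},
        {"role": "user", "content": resource_description},
    ]

def generate_template(resource_description: str) -> str:
    """Call the hosted model. Requires the openai package and an API key."""
    import openai  # pip install openai
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_messages(resource_description),
    )
    return response.choices[0].message.content
```

The per-call cost of this approach is small; as Thurai notes, it is training a new model for a narrow IT use case, not calling an existing one, that demands hyperscaler-class data and budgets.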

"If there are only 10 possible ways to deploy, [IT pros] can just have samples of that. Why would they need an AI model?" Thurai said.

There's also the question of whether enterprises would trust AI to take the place of a software developer, SRE or platform engineer. AIOps vendors that embraced NoOps proposed a similar idea five years ago, and most mainstream enterprises have yet to adopt it.

Generative AI isn't likely to change that, either, Thurai said. Instead, it will act as an accelerator in areas such as generating and reading technical documentation.

"There are a couple of example applications by AI21 Labs called Wordtune and Wordtune Read that are pretty good options to create or suggest documentation," Thurai said. "If you have a full 40-page document, for example, and you're an executive, you don't want to read all that. Wordtune Read can boil it down, and then say, 'These are the three areas you need to read.'"

To summarize the effect generative AI will have on IT work, Thurai cited an AI expert interviewed on HBO's Last Week Tonight on Feb. 26.

"I think if done right, it's not going to be AI replacing lawyers. It's going to be lawyers working with AI replacing lawyers who don't work with AI," said Erik Brynjolfsson, director of the Stanford Digital Economy Lab, in the Last Week Tonight segment.

The same will hold true for IT ops, Thurai said.

"It's equivalent to using steroids as a player in a sports league," he said. "You have a distinct advantage."

Beth Pariseau, senior news writer at TechTarget, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.

