A look at expectation vs. reality of generative AI in 2024
A year ago, expectations for generative AI in 2024 centered on regulation, open source and multimodality. Some predictions proved accurate; others missed the mark.
If 2023 was the year generative AI, and the accompanying profusion of large language models, arrived for both the enterprise and consumer markets, then 2024 was when LLMs and generative AI systems quickly grew and matured.
At the start of the year, people in the AI industry looked ahead and saw regulation of generative AI systems, a flowering of multimodal models, and whispers about artificial general intelligence, or AGI, continuing unabated. A year later, the landscape looks a lot different.
Here's a look at four scenarios for 2024, and how the reality actually turned out.
1) Responsible AI and regulation
Expectation: The expectation for 2024 was that regulation would continue to develop at the slow pace it usually does with technology.
Despite President Joe Biden's executive order on AI safety in October 2023 and the EU AI Act going into effect in 2024, few observers expected AI regulation to make huge progress in 2024.
Reality: The reality was not far from what was expected, said Michael Bennett, an AI policy adviser at Northeastern University's Institute for Experiential AI. "We didn't see any kind of major movement at the federal level," Bennett said.
Moreover, the nonbinding nature of the executive order makes enforcement difficult, RPA2AI analyst Kashyap Kompella said.
However, the executive order did prompt the government to ask each federal agency to name a chief AI officer, among other small changes.
Not only were there no meaningful regulatory measures in 2024, but there was also little resolution of the AI lawsuits filed at the start of the year. Lawsuits such as The New York Times' suit against OpenAI and Microsoft were not expected to reach quick resolutions, Kompella said.
Some of the legal conflicts did lead to developments outside the court system. For example, OpenAI signed deals with several other news publishers to train its models on their content, and vendors including Getty Images and Adobe began offering compensation to artists.
"We can see everybody's making this calculus that content is not going to be free forever. There is going to be cost associated with content," Kompella said.
Looking ahead: Some see even less regulation of AI in 2025 under President-elect Donald Trump. That will likely lead to more regulation at the state and local levels, Bennett said.
"We will probably see more states continue to figure out what types of policies make sense for them and probably move toward at least testing the water for new regulation," he said. "We will probably see municipalities doing the same thing in areas such as policing or employment."
As for the lawsuits by content creators against LLM vendors, more will emerge about the rights of each group, Kompella said.
"We can expect certain guidelines to be made that will clarify whether the LLM creators have the rights to use content from different companies, like news media sites, from published works, music, movies, videos and whatnot," he said.
2) Open source and multimodality
Expectation: With the release of Google Gemini in 2023 and the growing popularity of Meta's Llama family of open models, observers predicted that 2024 would see more multimodal models, which can take in text or images and generate images, video or audio.
Some also expected more open source AI models to pop up.
Reality: As expected, generative AI models improved markedly over the previous year.
"All have progressed in so many ways in terms of reasoning, accuracy, reliability, their ability to handle workloads, context window, multimodal," said AJ Sunder, chief information and product officer at Responsive, vendor of an AI platform for proposals and responses to proposals. "The models have gotten so much better now than even compared to the start of the year."
However, models have not reached true multimodality, Bennett said.
"In terms of a powerful GenAI system that can take data in any format, multiple modes of data, and generate something like a short movie, it seems like the commercial research side of that effort continues to move," he said. "I haven't seen something break through as a comprehensive approach to multimodality."
And open source AI models did not flower as much as some anticipated. "We're still seeing quite a few closed systems," Bennett said.
While fully open source models with widespread use remain few, models like Llama, which is largely open even though its training data set is closed, have flourished. In June, French company Mistral AI, which gained market traction with its open models, raised more than $640 million and made inroads into the corporate market.
The big cloud providers also embraced the Llama models in their model gardens, with some introducing open models of their own.
3) Finding the value in GenAI
Expectation: Many expected 2024 to be the year when enterprises would find value in GenAI and realize ROI.
Reality: While there was a move from ideation to experimentation, most organizations have yet to gain value from GenAI, Sunder said.
"People are more and more open to trusting the technology, using the technology, even still maintaining humans in the loop," he said. "A lot of the ROI promise, I don't think, has been realized."
While LLMs have boosted productivity for teams like marketing and helped developers code faster, they haven't truly transformed business processes, said Mark Greene, senior vice president and general manager at robotic process automation vendor UiPath.
"The change management aspect was underestimated," Greene said, adding that many observers overestimated AI models' accuracy and how fast enterprises would adopt generative AI systems.
4) Impact on jobs
Expectation: Many also feared job losses due to generative AI, predicting it would wipe out the jobs of artists, journalists, designers and others.
Reality: Generative AI has changed some jobs, but the arrival of the new and fast-evolving technology has not necessarily led to fewer employees on enterprise payrolls.
For example, developers and coders now have AI assistants helping them with their jobs, but most have not lost their jobs due to AI. "They are using the technology, but it hasn't negatively impacted their jobs and roles yet," Sunder said.
On the other hand, AI technology is taking over job roles for which workers are scarce, Gartner analyst Daryl Plummer said. These are jobs in areas such as supply and logistics, content creation, customer service, law, manufacturing and others.
"The notion that AI can do more and more of the things that human beings do is just a fact," Plummer said. "The people who are going to be more valuable are going to be the people who know to use AI to get something done most effectively."
Esther Shittu is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.