6 generative AI predictions for 2024
Analyst Mike Leone predicts what's next for generative AI -- from open source to regulatory shifts -- offering a comprehensive view of where the industry is headed in 2024.
As 2023 comes to a close, it's time to look ahead to next year. It's easy to get lost in all the possibilities when it comes to generative AI -- what can we realistically expect in 2024?
Of course, we'll continue to see a focus on enterprise readiness from technology providers, and the race toward artificial general intelligence and the intensifying AI chip wars will likely make headlines. But with so much having changed over the past year, nobody knows for sure where the industry is headed next. My six generative AI predictions for 2024 center on practicality: continued adoption, multimodality, open source, responsible AI, regulation and organizational exposure.
In 2024, I'll be dedicating most of my research to these topics. If your organization is affected by these developments, expect a year of peer-level research, key vendor announcements, industry shakeups and collaboration from me and my colleagues at TechTarget's Enterprise Strategy Group (ESG).
It's been a crazy 2023, and I don't expect anything less from 2024. Buckle up.
1. There will not be a trough of disillusionment
It's common for emerging technologies to move from a hype phase, where generative AI currently sits, to a period of waning interest: the dreaded trough of disillusionment. Maybe experiments are failing, or the business side isn't seeing value quickly enough. But that will not happen with generative AI.
Do I think some companies will fail at incorporating generative AI into the business? Certainly. Are generative AI tools not yet perfectly enterprise-ready? Sure. Do I think there are far too many models to choose from right now? Yes. But these factors aren't preventing organizations from adopting generative AI. This technology is already benefiting enterprises in so many ways that I think we're close to mainstream adoption.
In my view, two major hurdles could slow down adoption: cost and regulation. But given the rise of open source AI, which is driving the market today and gives organizations more transparency and control, I believe organizations will be positioned to address regulations faster than ever before. And as technology providers and vendors deliver greater cost transparency, we'll start seeing more affordable approaches.
2. Multimodal AI will bring a new level of productivity to several industries
Multimodal AI enables end users to interact with generative AI in ways that go beyond just text -- think images, audio and video. We've already seen some promising capabilities in this area, including Google's recent announcement of the natively multimodal Gemini, but we're just getting started.
Enabling generative AI models to process multiple types of inputs simultaneously will yield significantly improved, far more contextually aware responses. The ability to interact with a PDF, chart or graph, for example, will help many stakeholders within businesses. But what has me most excited is the potential impact in areas like manufacturing, engineering and healthcare. Imagine interacting with schematics, blueprints or genomic data in a conversational way.
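To make that concrete, here's a minimal sketch of what a multimodal request can look like in code, using the content-part format of the OpenAI Python SDK (v1+). The model name, image URL and question are placeholders chosen for illustration; Gemini and other multimodal APIs follow a similar text-plus-media pattern.

```python
# A minimal multimodal request: one user turn that mixes text and an image.
# Requires the openai package (v1+) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder; any vision-capable model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What tolerance issues do you see in this schematic?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/schematic.png"}},  # placeholder URL
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```

The design point is that the image and the question travel in the same user turn, so the model can ground its answer in the visual content rather than a text description of it.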
3. Open source will continue to pave the way for broader generative AI adoption
When ESG asked organizations how they planned to develop or use generative AI, more than a third said they would use an open source large language model (LLM). These organizations want control, transparency and the ability to customize open source models by incorporating their own data.
This is where retrieval-augmented generation (RAG) comes in. RAG is a technique for curbing hallucinations and other inaccuracies: semantic retrieval pulls in relevant, specific content, such as enterprise data, and supplies it to the model alongside the prompt so that responses are grounded in that content.
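To show the mechanics, below is a minimal, self-contained sketch of the RAG pattern in Python. The word-count similarity function is a toy stand-in for a real embedding model, the document list stands in for an enterprise knowledge base, and the commented-out call_llm is a hypothetical model endpoint.

```python
from collections import Counter
import math

# Toy "knowledge base" standing in for enterprise documents.
DOCUMENTS = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9 a.m. to 5 p.m. Eastern, Monday through Friday.",
    "Enterprise plans include single sign-on and audit logging.",
]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts -- a stand-in for real embeddings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Retrieval step: return the k documents most similar to the query."""
    return sorted(DOCUMENTS, key=lambda d: similarity(query, d), reverse=True)[:k]

def build_rag_prompt(query: str) -> str:
    """Augmentation step: ground the model in retrieved enterprise content."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return ("Answer using only the context below. If the answer is not in the "
            f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}")

# A hypothetical model call would receive the grounded prompt:
# response = call_llm(build_rag_prompt("What is the refund window?"))
print(build_rag_prompt("What is the refund window?"))
```

The key design choice is that the model is instructed to answer only from the retrieved enterprise content, which is how RAG grounds responses and curbs hallucinations without retraining the model.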
Over the long term, I believe many of the current models will converge in performance, producing responses similar enough that it won't much matter which model you're using. The distinguishing factor will no longer be the models themselves, but the data they use: not only the data that goes into training sets, but also the data supplied at inference time through RAG. Data will emerge as the biggest and most important differentiator for organizations.
4. 2024 will be the year of responsible AI
We're witnessing major alliances form with the goal of improving responsible AI capabilities through open innovation. Within major corporations, we've already seen entirely new business units form that are focused on responsibility and AI ethics. In 2024, we'll start seeing standardization of responsible AI protocols and best practices focused on governance, safety, security and trust.
I believe we have significant work to do here. Too many organizations pushing a generative AI narrative simply don't have a good answer for how they're addressing responsible AI. Is it an internal governing body, something built in-house, a gap filled through one or more partners, or all of the above?
And finally, there's my favorite question: Does responsible AI need to be a product or service, or is it a mentality and belief that must be instilled throughout an organization? The latter approach is similar to the "If you see something, say something" mindset.
5. National and global AI regulation will come fast
As of the end of 2023, EU lawmakers have already reached agreement on the EU AI Act. We're also seeing the early stages of U.S. government involvement with the Biden administration's recent executive order on AI. 2024 will be the year of establishing clear laws to govern AI, and I believe this will happen at both the national and global levels.
Regulation will come quickly in 2024. But how fast can enterprises respond without outright hitting the stop button? Enterprises will need to show a new level of agility and adaptability as they balance productivity and efficiency gains with compliance and security concerns.
6. Organizations will be exposed for overstepping or lacking policies
In 2024, we'll see organizations make headlines for the wrong reasons: violations of new regulations, security breaches, inadvertent sharing of private data or reliance on inaccurate AI responses.
This trend, already emerging at the tail end of 2023, is just getting started. I believe several Fortune 500 companies will fall specifically because they're playing fast and loose with generative AI. And note that the companies I'm predicting will fall are not the providers of LLMs or the enablers of the technology; these failures will occur outside the technology industry, driven simply by in-house negligence.
Mike Leone is a principal analyst at TechTarget's Enterprise Strategy Group, where he covers data, analytics and AI.
Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.