What is the future of machine learning?
Machine learning is changing how we write code, diagnose illnesses and create content, but implementation requires careful consideration to maximize benefits and mitigate risks.
Machine learning algorithms generate predictions, recommendations and new content by analyzing and identifying patterns in their training data. These capabilities power widely used technologies such as digital assistants and recommendation algorithms, as well as popular generative AI tools including ChatGPT and Midjourney.
Although these high-profile examples of generative AI have recently captured public attention, machine learning has promising applications in contexts ranging from big data analytics to self-driving cars. And adoption is already widespread: In a recent survey by consulting firm McKinsey & Company, 72% of respondents said their organization had adopted AI in some capacity.
Many of the underlying concepts powering today's machine learning applications date back as far as the 1950s, but the 2010s saw several advances that enabled this widespread business use:
- Access to data. Increasing digitization of documents and internet adoption led to the big data revolution. Together with improvements in the technologies used to store, manage and analyze data, these factors made it easier to create machine learning models, which require extensive training data.
- More powerful and flexible compute. With more efficient, powerful GPUs, AI developers could train models on larger data sets more quickly. And the rise of cloud computing let organizations explore machine learning without heavy upfront investments by enabling them to access specialized AI infrastructure as needed.
- Algorithmic and technical developments. A spate of breakthroughs across machine learning subfields, notably deep learning, led to increased interest in AI and revealed new use cases. In particular, the emergence of the transformer model architecture paved the way for today's popular generative AI tools.
These developments moved AI and machine learning into the mainstream business realm. Popular AI use cases in today's workplaces include predictive analytics, customer service chatbots and AI-assisted quality control, among many others.
Artificial intelligence vs. machine learning
Although the two terms are sometimes used interchangeably in practice, machine learning is a subset of the broader field of AI. Whereas AI is a broad, somewhat nebulously defined concept -- essentially, a machine's ability to perform tasks typically associated with human intelligence -- machine learning is a specific form of AI that involves training algorithms to detect patterns and relationships in data and adjust their actions accordingly.
Key trends that could shape the future of machine learning
Machine learning developments are expected across a range of fields over the next five to 10 years. The following are a few examples:
- Customer experience. Machine learning algorithms can create adaptive, personally tailored customer experiences, such as individualized promotions. Virtual assistants and chatbots can also automate repetitive customer service tasks, such as responding to customers' emails and chats.
- Supply chain management. Predictive algorithms can analyze historical data to forecast future demand, optimizing inventory management and minimizing waste. Machine learning algorithms can also automatically track purchases, shipments and the like, and alert companies to possible issues.
- Financial services. In finance, machine learning facilitates tasks such as risk modeling, portfolio management and market forecasting. And applying machine learning algorithms to customers' transaction data helps banks automatically detect potential fraudulent activity and suggest personalized financial products.
- Cybersecurity. To combat ever more sophisticated hacking techniques, machine learning is positioned to become integral to cybersecurity. Machine learning algorithms can detect vulnerabilities in an organization's security posture and analyze traffic for anomalies that could indicate a cyber attack.
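The traffic-anomaly idea in the last bullet can be sketched in a few lines with scikit-learn's IsolationForest. The connection features and values below are illustrative assumptions, not real telemetry; production systems ingest far richer signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" traffic: (bytes sent, requests per minute) clustered
# around typical values -- purely illustrative stand-ins for real features.
normal_traffic = rng.normal(loc=[500, 30], scale=[50, 5], size=(200, 2))

# Two extreme bursts of the kind that might accompany a DoS attempt
suspicious = np.array([[5000.0, 400.0], [4500.0, 350.0]])

# Fit the detector on normal traffic only, then score the new observations;
# predict() returns -1 for points it considers outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)
flags = detector.predict(suspicious)
print(flags)
```

In practice, an alert pipeline would route flagged connections to analysts rather than acting on them automatically.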
Among the many possible use cases for machine learning, several areas are expected to lead adoption, including natural language processing (NLP), computer vision, machine learning in healthcare and AI-assisted software development.
Natural language processing
With the rise in popularity of ChatGPT and other large language models (LLMs), it's no surprise that NLP is currently a major area of focus in machine learning. Potential NLP developments over the next few years include more fluent conversational AI, more versatile models and an enterprise preference for narrower, fine-tuned language models.
As recently as 2018, the machine learning field as a whole was more focused on computer vision than NLP, said Ivan Lee, founder and CEO of Datasaur, which builds data labeling software for NLP contexts. But over the past year, he's noticed a significant shift in the industry's focus.
"We're seeing a lot of companies that maybe haven't invested in AI in the last decade coming around to it," Lee said. "Industries like real estate, agriculture, insurance -- folks who maybe haven't spent as much time with NLP -- now they're trying to explore it."
As with other fields within machine learning, improvements in NLP will be driven by advances in algorithms, infrastructure and tooling. But NLP evaluation methods are also becoming an increasingly important area of focus.
"We're starting to see the evolution of how people approach fine-tuning and improving [LLMs]," Lee said. For example, LLMs themselves can label data for NLP model training. Although data labeling can't yet -- and likely shouldn't -- be fully automated, he said, partial automation with LLMs can expedite model training and fine-tuning.
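The partially automated labeling workflow Lee describes can be outlined as follows. The `llm_label` function here is a placeholder: a trivial keyword heuristic stands in for a real model call so the sketch runs on its own, and it is not any particular vendor's API.

```python
def llm_label(text: str) -> str:
    """Placeholder for an LLM call. A keyword heuristic stands in here
    so the sketch is self-contained; a real pipeline would query a model."""
    positive_cues = ("love", "great", "excellent")
    return "positive" if any(w in text.lower() for w in positive_cues) else "negative"

unlabeled = [
    "I love this product",
    "Terrible support experience",
]

# The model proposes labels; a human reviewer would then spot-check a sample
# (or all low-confidence items) before the data is used for training.
proposed = [(text, llm_label(text)) for text in unlabeled]
for text, label in proposed:
    print(f"{label}: {text}")
```

The point of the pattern is the division of labor: the model does the bulk first pass, and human review keeps labeling quality under control.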
Because language is essential to so many tasks, NLP has applications in almost every sector. For example, LLM-powered chatbots such as ChatGPT, Google Gemini and Anthropic's Claude are designed to be versatile assistants for diverse tasks, from generating marketing collateral to summarizing lengthy PDFs.
But specialized language models fine-tuned on enterprise data could provide more personalized and contextually relevant responses to user queries. For example, an enterprise HR chatbot fine-tuned on internal documentation could account for specific company policies when answering users' natural language questions.
"The beauty of [ChatGPT] is that you can try a million different queries," Lee said. "But in the business setting, you really want to narrow that scope down. ... It's OK if [a recipe generator] doesn't tell me the best travel plans for San Antonio, but it better be fully tested and really good at recipes."
Computer vision
Outside of LLMs, computer vision is among the top areas of machine learning seeing an uptick in enterprise interest, said Ben Lynton, founder and CEO of AI consulting firm 10ahead AI.
Like NLP, computer vision has applications across many industries. Adoption will likely be spurred by improvements in algorithms such as image classifiers and object detectors, as well as increased access to sensor data and more customized models. Possible trends in the realm of computer vision include the following:
- Facial recognition for security use cases such as access control and identity verification.
- Object detection for inventory management and quality control inspections in manufacturing and retail.
- Advanced driver assistance systems that use machine learning to perform tasks such as automatically adjusting vehicle speed, monitoring driver alertness, and warning of possible collisions or lane drift.
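As a toy illustration of the image-classification building block behind many of these use cases, the following trains a simple classifier on scikit-learn's built-in 8x8 digit images. Real computer vision systems rely on deep networks and far larger images, so treat this only as a minimal sketch of the idea.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small built-in image dataset: 8x8 grayscale digits, flattened to 64 features
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear classifier is enough for these tiny images; deep models take over
# as images and classes grow more complex
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(round(accuracy, 2))
```

Even this simple model classifies held-out digit images with high accuracy, which hints at why the approach scales so well with better architectures and more data.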
In generative AI, image generators such as Dall-E and Midjourney are already used by consumers as well as in marketing and graphic design. Moving forward, advances in video generation could further transform creative workflows.
Lee is particularly interested in multimodal AI, such as combining advanced computer vision capabilities with NLP and audio algorithms. "Image, video, audio, text -- using transformers, you can basically boil everything down to this core language and then output whatever you'd like," he said. For example, a model could create audio based on a text prompt or a video based on an input image.
Healthcare and medicine
Machine learning in healthcare could accelerate medical research and improve treatment outcomes. Promising areas include early disease detection, personalized medicine and scientific breakthroughs thanks to powerful models such as the protein structure predictor AlphaFold.
Hospitals have begun adopting clinical decision support systems powered by machine learning to aid in diagnosis, treatment planning and medical imaging analysis. AI-assisted analysis of complex medical scans could help expedite diagnosis by identifying abnormalities -- for example, correcting corrupted MRI data or detecting heart defects in electrocardiograms.
A top area of focus is developing and automating patient engagement efforts with machine learning, said Hal McCard, an attorney at law firm Spencer Fane whose practice focuses on the healthcare sector. Machine learning models can analyze massive health data sets to better predict patient outcomes, enabling healthcare providers to develop more personalized, timelier interventions that improve adherence to treatment regimens.
Here, the biggest shift isn't the underlying technology, but rather the scale. "Machine learning for data-predicted solutions and population health is not a new concept," McCard said. Rather, what's changing is "how it's being applied and the effectiveness with which you can take that output and ... use it to drive better outcomes in patient care and clinical care."
NLP has also shown some promise for clinical decision-making and summarizing physician notes. But for the foreseeable future, implementation still requires close human oversight. In a recent study, ChatGPT provided inappropriate cancer treatment recommendations in a third of cases and produced hallucinations nearly 13% of the time.
"When it comes to clinical decision-making, there are so many subtleties for every patient's unique situation," said Dr. Danielle Bitterman, the study's corresponding author and an assistant professor of radiation oncology at Harvard Medical School, in a release announcing the findings. "A right answer can be very nuanced, and not necessarily something ChatGPT or another large language model can provide."
Software development and IT
Machine learning is also changing technical roles by automating repetitive coding tasks and detecting potential bugs and security vulnerabilities.
Emerging generative tools such as ChatGPT, GitHub Copilot and Tabnine can produce code and technical documentation based on natural language prompts. Although human review remains essential, offloading initial writing of boilerplate code to AI can significantly speed up the development process.
Combined with NLP advances, this could mean more interactive, chat-based functionalities in future integrated development environments. "I think in the future, coding editors will have a more chat-based interface," said Jonathan Siddharth, co-founder and CEO of Turing, a company that matches developers with employers seeking technical talent. "Every software engineer [will have] an AI assistant beside them who they can talk to when they code."
In software testing and monitoring, using machine learning techniques such as anomaly detection and predictive analytics to parse log data can help IT teams predict system failures or identify bottlenecks. Similarly, AIOps tools could use machine learning to automatically scale resource allocations based on usage patterns and suggest more efficient infrastructure setups.
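The log-anomaly idea above can be sketched with nothing more than summary statistics: flag entries that fall far outside the typical range. The latency values below are made up for illustration; real AIOps tools use far more sophisticated models over streaming data.

```python
import statistics

# Simulated response-time log in milliseconds; the spike is the kind of
# outlier that might signal a failing service (values are illustrative)
latencies = [102, 98, 110, 105, 99, 101, 97, 103, 480, 100]

mean = statistics.mean(latencies)
stdev = statistics.pstdev(latencies)

# Flag entries more than two standard deviations from the mean
anomalies = [x for x in latencies if abs(x - mean) > 2 * stdev]
print(anomalies)
```

A monitoring pipeline would apply this kind of check per service over rolling windows and feed flagged spikes into alerting, rather than scanning a static list.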
Although prompt engineering -- the practice of crafting queries for generative AI models that yield the best possible output -- has recently been a hot topic in the tech community, it's unlikely that prompt engineer will continue to be a standalone role as generative models become more adept. "I don't think 'prompt engineer' is going to be a position you're hired for," Lee said.
However, experts do expect fluency with generative AI tools to become an increasingly important skill for technical professionals. "In terms of software engineering, I think we're going to see more and more engineers who know how to prompt LLMs," Siddharth said. "I think it'll be a broadly applicable skill."
Potential challenges ahead
Enthusiasm and optimism abound, but implementing machine learning initiatives requires addressing practical challenges and security risks as well as potential social and environmental harms.
Adopting machine learning raises pressing ethical concerns, such as algorithmic bias and data privacy. On the technical side, integrating machine learning into legacy systems and existing IT workflows can be difficult, requiring specialized skills in machine learning operations, or MLOps, and engineering. And whether emerging generative AI tools will live up to the hype in real workplaces remains unclear.
In NLP, for example, human-level fluency remains far off, and it's unclear whether AI will ever truly replicate human performance or reasoning in open-ended scenarios. LLMs can generate convincing text, but lack common sense or reasoning abilities. Similar limitations exist for other areas, such as computer vision, where models still struggle with unfamiliar data and lack the contextual understanding that comes naturally to humans. Given these limitations, it's important to carefully choose the best machine learning approach for a given use case -- if machine learning is indeed necessary at all.
"There is a class of problems that can be solved with generative AI," Siddharth said. "There is an even bigger class of problems that can be solved with just AI. There's an even bigger class of problems that could be solved with good data science and data analytics. You have to figure out what's the right solution for the job."
Moreover, generative AI is often riskier to implement than other types of models, particularly for sectors such as healthcare that deal with highly sensitive personal data. "The generative solutions that seek to produce original content and things like that, I think, carry the most risk," McCard said.
In evaluating potential privacy risks for external products, McCard emphasized the importance of understanding a model's data sources. "It's a little bit unrealistic to think that you're going to get insight into the algorithm," he said. "So, understanding that it might not ultimately be possible to understand the algorithm, then I think the question turns to the data sources and the rights of use in the data sources."
The massive amounts of training data that machine learning models require make them costly and difficult to build. Rising compute demand following the generative AI boom has strained cloud services and hardware providers, contributing to an ongoing shortage of GPUs. Additional demand for specialized machine learning hardware could further exacerbate these supply chain issues.
This ties into another foundational challenge, Lynton said: namely, the state of a company's IT infrastructure. He gave the example of a consulting engagement with an industry-leading client whose accounting, procurement and customer data systems were all on different legacy systems that could not communicate with one another -- including two that were discontinued and unmaintainable.
"It's slightly terrifying, but this is a very common situation for many large companies," Lynton said. "The reason this is an issue for AI adoption is that most leadership teams are unaware of their IT landscape and so may budget X million [dollars] for AI, but then get little to no ROI because a great deal of it is wasted in trying to patch together their systems."
McCard raised a similar concern about readiness for implementation in healthcare settings. "I have serious questions about the ability of some of these tools, especially the generative tools, to interface or be interoperable with the electronic health record systems and other systems that these health systems are currently running," he said.
The hardware and computations required for machine learning initiatives also have environmental implications, particularly with the rise of generative AI. Training machine learning models involves high levels of carbon emissions, particularly for large models with billions of parameters.
"The main risk is that people generate more carbon by training AI models than their sustainability use cases could ever save," Lynton said. "This wasn't a huge problem with the more established fields ... but now with [generative AI], it's a real threat."
To mitigate climate impacts, Lynton suggests choosing computationally efficient models and measuring the environmental impact of an AI project from start to finish. More efficient model architectures mean shorter training times and, in turn, a smaller carbon footprint.
The future of enterprise machine learning adoption
Enterprise interest in machine learning is on the rise, with investment in generative AI alone projected to quadruple over the next two to three years.
"AI transformation is the new digital transformation," Siddharth said. "Every large enterprise company that I meet is thinking about what their AI strategy should be." Specifically, he said, companies are interested in exploring how AI and machine learning can help them better serve users or improve operational efficiency.
But in practice, not all companies are ready for the transition. For many enterprises, AI and machine learning are "still surprisingly a box-ticking exercise or risky investment, more than an accepted necessity," Lynton said. In many cases, an order comes down to "incorporate AI into the business," without further detail on what that actually entails, he said.
Moving forward, ensuring success in enterprise machine learning initiatives will require companies to slow down, rather than rushing to keep up with the AI hype. Start small with a pilot project, get input from a wide range of teams, ensure the organization's data and tech stacks are modernized, and implement strong data governance and ethics practices.
Lynton suggests taking an automation-first strategy. Rather than going full steam ahead on a complex AI initiative, start by automating five manual, repetitive and rules-based processes, such as a daily data entry task that involves entering a report from a procurement system into a separate accounting system.
These automation use cases are typically cheaper and show ROI more quickly compared with complex machine learning applications. Thus, an automation-first strategy can quickly give leaders a picture of their organization's readiness for an AI initiative -- which, in turn, can help prevent costly missteps.
"In a lot of cases, the outcome is that they are not [ready], and it's more important to first upgrade [or] combine some legacy systems," Lynton said.
Editor's note: This article has been updated to incorporate the latest survey data on enterprise AI adoption rates.
Lev Craig covers AI and machine learning as the site editor for TechTarget Editorial's Enterprise AI site. Craig graduated from Harvard University with a bachelor's degree in English and has previously written about enterprise IT, software development and cybersecurity.