What is enterprise AI? A complete guide for businesses
Enterprise AI tools are transforming how work is done, but companies must overcome various challenges to derive value from this powerful and rapidly evolving technology.
Enterprise AI refers to the artificial intelligence technologies used by companies to transform their operations and gain a competitive advantage. These AI tools include machine learning, natural language processing, robotics and computer vision systems -- sophisticated, rapidly evolving hardware and software that are difficult to implement. Enterprise AI applications also require specialized skills plus large quantities of high-quality data.
Companies are increasingly interested in these demanding AI technologies because of their potential to reinvent critical business processes in ways that other forms of enterprise IT can't -- through intelligent automation, optimization, cost reduction and improved decision-making. Indeed, McKinsey's latest annual research on the state of enterprise AI found that its use is surging: Fueled by generative AI, business adoption of AI jumped to 72% of surveyed organizations in 2024, up from about 50% in each of the past six years. Half the organizations surveyed said they've adopted AI in two or more business functions, up from a third in 2023.
But the risks associated with integrating AI technologies into existing business systems and processes are substantial, and developing the metrics to ensure AI success has been challenging. These obstacles stem not only from the complexity of the AI tools but also from the nature of enterprise systems themselves. Enterprise systems, whether traditional or cutting edge, must be scalable, dependable, secure, smoothly integrated into workflows and accepted by employees. In addition, AI talent is scarce and expensive.
Enterprise AI is also subject to hype and awash in newly developed tools and services from AI vendors. Which approaches will become the standard baseline technologies is still unnervingly uncertain. What should companies do to ensure their adoption of enterprise AI drives revenue and becomes a competitive differentiator?
This guide to enterprise AI provides the building blocks for becoming successful AI implementers, users and innovators. It points AI novices to introductory explanations of how AI works and the various types of AI. For more experienced businesses, it lays out how to build a successful AI strategy, potential AI use cases, steps for implementing AI, the big pitfalls to watch for and the breakthroughs that are driving the enterprise AI field forward, among other topics. Hyperlinks to TechTarget articles that provide more detail and insights on these topics are included throughout the guide.
Why is AI important in the enterprise?
Not so long ago, businesses were exhorted to embrace digital transformation -- widely perceived to be a matter of survival in an economy orchestrated by internet giants Google, Amazon, Uber and the like. Enterprise AI represents yet another paradigm shift in business transformation. The application of artificial intelligence in the enterprise is profoundly changing the way businesses work. Companies are incorporating AI technologies into their business operations with the aim of saving money, boosting efficiency, generating insights and creating new markets.
There are AI-powered enterprise applications to enhance customer service, maximize sales, sharpen cybersecurity, free up workers from mundane tasks, improve existing products and point the way to new products. It is hard to think of an area in the enterprise where AI won't have an impact. However, enterprise leaders determined to use AI to improve their businesses and ensure a return on their investment face these overarching challenges:
- The domain of artificial intelligence is changing rapidly because of the tremendous amount of AI research being done. The world's biggest companies, research institutions and governments around the globe are supporting major research initiatives on AI.
- There is a multitude of potential AI use cases to consider; AI can be applied to any problem facing a company or humankind writ large.
To reap the value of AI in the enterprise, business leaders must understand how AI works, where AI technologies can be aptly applied in their businesses and where they cannot -- a daunting task that starts with the need for effective data management.
Impact of AI in the enterprise
The value of AI to 21st-century businesses has been compared to the strategic value of electricity in the early 20th century when electrification transformed industries like manufacturing and created new ones such as mass communications. "AI is strategic because the scale, scope, complexity and the dynamism in business today is so extreme that humans can no longer manage it without artificial intelligence," Chris Brahm, senior advisory partner at Bain & Company, told TechTarget.
In the near term, AI's biggest impact on small businesses and large companies alike stems from its ability to automate and augment jobs that today are done by humans.
Labor gains realized from using AI are expected to expand upon and surpass those made by current workplace automation tools. And by analyzing vast volumes of data, AI won't simply automate work tasks but will generate the most efficient way to complete a task and adjust workflows on the fly as circumstances change.
AI is already augmenting human work in many fields. For example, augmented intelligence capabilities assist doctors in medical diagnoses and help contact center workers deal more effectively with customer queries and complaints. In security, AI is being used to automatically respond to cybersecurity threats and prioritize those that need human attention. Project managers are using AI-powered software to prioritize and schedule work, estimate costs and allocate resources. IT teams are using AIOps to automate the identification and resolution of common IT issues. Banks are using AI to speed up and support loan processing and to ensure compliance.
The advent of generative AI dramatically expands the type of jobs AI can automate and augment. Businesses and consumers have quickly adopted GenAI technology, using applications such as ChatGPT, Gemini and Copilot to conduct searches, create art, compose essays, write code and make conversation.
While enterprise AI is creating new jobs, such as AI product managers, AI engineers and AI ethics officers, its potential to eliminate many jobs done today by humans is of major concern to workers, as described in the following sections on benefits and risks of AI.
The basics of how AI works
Artificial intelligence is the simulation by computer systems of how our brains learn, reason, self-correct and create. As explained in TechTarget's comprehensive definition of artificial intelligence, AI systems generally work by ingesting large amounts of training data, analyzing that data for correlations and patterns and using these patterns to make predictions about future states. Learning involves creating rules, or algorithms, with instructions for AI tools on how to complete specific tasks; reasoning is their ability to choose the right algorithm for the job; self-correction is AI's capacity to continuously learn and adapt based on new data; creativity involves making new content from existing data. There also are four types of AI: reactive, limited memory, theory of mind and self-aware.
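The learn-from-data, predict-on-new-input loop described above can be sketched in a few lines of code. The following toy nearest-neighbor classifier is purely illustrative -- the customer data and labels are invented -- but it shows the core mechanic: ingest labeled training examples, then predict by matching new inputs against the patterns in that data.

```python
# A minimal illustration of how an AI system learns from training data:
# a 1-nearest-neighbor "model" memorizes labeled examples and classifies
# new points by similarity. Data and labels here are invented toy values.

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict(training_data, point):
    # Return the label of the closest known example -- the "pattern"
    # the model has absorbed from its training data.
    nearest = min(training_data, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Training data: (features, label) pairs, e.g. (weekly hours of product
# use, support tickets per week) -> churn risk.
training_data = [
    ((1.0, 0.5), "churn"),
    ((8.0, 0.1), "retain"),
    ((1.5, 0.7), "churn"),
    ((9.0, 0.2), "retain"),
]

print(predict(training_data, (7.5, 0.3)))  # closest to the "retain" examples
```

Production systems replace this memorization with statistical models trained on far larger data sets, but the principle -- finding patterns in past data to predict future states -- is the same.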
What are the benefits of AI in the enterprise?
The embrace of enterprise AI for its potential to drive growth, innovation and other business advantages is near universal. In a 2024 "AI in the Enterprise Survey" commissioned by digital transformation company UST, for example, 93% of 600 senior IT decision-makers at large companies said AI is essential to success. A late 2023 survey conducted for research firm Frost & Sullivan's "Global State of AI, 2024" report found that 89% of organizations in multiple industry verticals believe AI and machine learning will help them achieve their business priorities. Other surveys report similar levels of enthusiasm for AI among business and IT leaders.
Increasing revenue and improving operational efficiency are key drivers for investments in AI. Here are some additional widely cited benefits of AI for businesses:
- Improved customer experiences. AI's ability to speed up and hyperpersonalize customer service is a top enterprise reason for investing in AI tools. Businesses across industries use recommendation engines to generate real-time personalized suggestions for products, services or content. Voice recognition systems and natural language processing (NLP) are used to streamline call routing, convert speech to text and mimic natural conversation.
- Improved monitoring. AI's capacity to process data in real time means organizations can implement near-instantaneous monitoring of business operations. For example, factory floors are using image recognition software and machine learning models in quality control processes to monitor production and flag problems.
- Improved speed of business. AI enables shorter business cycles by automating internal and customer-facing processes. Reducing the time to move from one stage to the next, such as from designing a product to commercialization, results in faster ROI.
- Better quality and reduced human error. Organizations can reduce errors and improve compliance by using AI on tasks previously done manually or with traditional automation tools, such as extract, transform and load software. When integrated into robotic process automation software, AI and machine learning tools add the ability to continuously improve process performance. Financial reconciliation is an example of an area where machine learning has substantially reduced costs, time and errors.
- Better talent management. Companies are using enterprise AI software to streamline the hiring process, root out bias in corporate communications and boost productivity by screening for top-tier candidates. Advances in conversational AI and new language modeling techniques give chatbots the ability to provide personalized service to job candidates and employees. Internally, HR departments use AI tools to gauge employee sentiment, spot high performers, identify pay discrepancies and deliver more engaging workplace experiences.
- Business model innovation and expansion. Digital natives such as Amazon, Airbnb, Uber and others used AI to help implement their new business models. Opportunities to remake and expand business models have also opened up for traditional companies in retail, banking, insurance and other industries as they refine their data and AI strategies.
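The recommendation engines mentioned in the list above can be sketched in miniature. This is a hypothetical item-to-item collaborative filter: items are scored by the cosine similarity of their user-rating vectors, so products rated together get recommended together. The ratings, users and item names are all invented for illustration.

```python
# Toy item-to-item recommendation engine: recommend the item whose
# user-rating vector is most similar (by cosine similarity) to the
# item a customer is viewing. All data here is invented.
from math import sqrt

ratings = {  # user -> {item: rating on a 1-5 scale}
    "ann": {"laptop": 5, "mouse": 4, "desk": 1},
    "bob": {"laptop": 4, "mouse": 5},
    "eve": {"desk": 5, "lamp": 4},
}

def item_vector(item):
    # One dimension per user; 0 if the user hasn't rated the item.
    return [ratings[u].get(item, 0) for u in sorted(ratings)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(item, k=1):
    # Rank every other item by similarity to the one being viewed.
    items = {i for user in ratings.values() for i in user} - {item}
    return sorted(items, key=lambda i: cosine(item_vector(item), item_vector(i)),
                  reverse=True)[:k]

print(recommend("laptop"))  # the mouse, rated alongside laptops by both buyers
```

Real engines operate on millions of users and items and blend in content features and context, but the similarity-scoring idea is the same.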
Challenges and risks of AI in the enterprise
Some of the challenges and risks associated with enterprise AI stem from the same mistakes that can sabotage any technology deployment: inadequate planning, insufficient skill sets, lack of alignment with business goals and poor communication.
For these types of issues, companies should lean on the best practices that have guided the effective adoption of other new technologies. But AI also comes with unique risks that many organizations are ill-equipped to deal with -- or even recognize -- due to the nature of the technology and how fast it is evolving.
One of the biggest barriers to effectively using AI in the enterprise is well documented: worker mistrust. Professional services firm KPMG found that 61% of the respondents to its "Trust in Artificial Intelligence: Global Insights 2023" survey were either ambivalent about or unwilling to trust AI. A 2024 Salesforce survey of nearly 6,000 knowledge workers worldwide showed that 56% of AI users found it difficult to get what they wanted from the technology and 54% said they didn't trust the data used to train AI systems, putting increased AI adoption by those users at risk.
Dispelling mistrust of AI is easier said than done, as shown by the following current weaknesses inherent in AI technologies.
Unintentional bias. Like any data-driven tool, an AI algorithm depends on the quality of the data used to train the model. It is therefore subject to biases inherent in that data, which can lead to faulty results, socially inappropriate responses and even greater mistrust.
Unexplainable results. Explainability -- understanding how an algorithm reaches its conclusion -- is not always possible with AI systems because of their black box nature: Many models are configured with numerous hidden layers that self-organize the weights used as parameters to generate a response.
Hallucinations. An algorithm's behavior, or output, in a so-called deterministic environment can be predicted from the input. Most AI systems today are stochastic or probabilistic, meaning they rely on statistical models and techniques to generate responses that the algorithm deems probable in a given scenario. But the results are sometimes fantasy, as experienced by many users of ChatGPT, and are referred to as AI hallucinations.
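The stochastic behavior described above can be shown in miniature. In the hypothetical sketch below, a model has learned a probability distribution over possible next words for a prompt; because it samples from that distribution rather than computing a fixed answer, repeated runs on the same input can yield different continuations, including a plausible-sounding fabrication. The prompt, vocabulary and probabilities are all invented.

```python
# Probabilistic output in miniature: given the same input, the "model"
# samples its next word from a learned distribution, so repeated runs
# can differ -- and can surface a confident-sounding fabrication.
import random

next_word_probs = {  # invented P(next word | "The capital of Atlantis is")
    "unknown": 0.6,      # the faithful answer
    "Poseidonia": 0.3,   # plausible-sounding but fabricated
    "Paris": 0.1,        # confidently wrong
}

def sample_next_word(rng):
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return rng.choices(words, weights=weights)[0]

rng = random.Random(0)  # seeded so runs are reproducible
print([sample_next_word(rng) for _ in range(5)])
```

A deterministic system would return the same word every time; sampling is what makes generative output varied, creative and occasionally hallucinated.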
The adoption of shadow AI -- the unauthorized use of AI tools at work -- is another risk enterprises must address. The "2024 Work Trend Index Annual Report" from Microsoft and LinkedIn, released in May 2024, found that 78% of AI users are bringing their own AI tools to work, highlighting the need to develop AI governance policies.
Other risks that companies must confront include running afoul of AI laws and proposed regulations; the malicious use of AI to automate and amplify cyberattacks; and the potential for internal morale issues and social unrest due to AI-induced job losses. While some jobs are likely immune to being replaced by AI, many others could increasingly be taken over by the technology.
AI and big data
AI and big data play a symbiotic role in 21st-century business success. Large data sets, including a combination of structured, semistructured and unstructured data, are the raw material for yielding the in-depth business intelligence and analytics that drive improvements in existing business operations and lead to new business opportunities. Companies cannot fully capitalize on these vast data stores, however, without the help of AI. For example, deep learning, a subset of machine learning, uses neural networks to process large data sets and identify subtle patterns and correlations that can give companies a competitive edge.
Simultaneously, AI relies on big data for training and generating insights. AI's ability to make meaningful predictions -- to get at the truth of a matter rather than mimic human biases -- requires not only vast stores of data but also data of high quality. Cloud computing environments have helped enable AI applications by providing the computational power needed to process and manage the required data in a scalable and flexible architecture. In addition, the cloud provides wider access to enterprise users, democratizing AI capabilities.
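The pattern extraction described above can be illustrated with the smallest possible "network": a single neuron that discovers the relationship hidden in a toy data set by gradient descent. This is a deliberately simplified sketch -- real deep learning stacks millions of such units over vastly larger data sets -- but the mechanism of iteratively adjusting weights to fit the data is the same.

```python
# Minimal sketch of how a neural network extracts a pattern from data:
# a single weight is tuned by gradient descent until it captures the
# relationship y = 2x hidden in the (toy) training data.

data = [(x, 2 * x) for x in range(1, 6)]  # the hidden pattern: y = 2x

w = 0.0    # the neuron's single weight, initially ignorant
lr = 0.01  # learning rate
for _ in range(200):           # repeated passes over the training data
    for x, y in data:
        pred = w * x           # the model's current guess
        grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
        w -= lr * grad         # nudge the weight to reduce the error

print(round(w, 2))  # converges toward 2.0, the pattern in the data
```

With noisy or biased data, the same procedure faithfully learns the noise and the bias, which is why the quality of big data matters as much as its quantity.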
Current business applications of AI
A Google search for "AI use cases" turns up millions of results, an indication of the many enterprise applications of AI. Its use cases span industries, from financial services -- an early adopter -- to healthcare, education, marketing and retail. AI has made its way into every business department, from marketing, finance and HR to IT and business operations. Additionally, the use cases incorporate a range of AI applications: natural language generation tools used in customer service, deep learning platforms used in self-driving cars, facial recognition tools used by law enforcement, and many others.
Here is a sampling of how various industries and business departments are using AI.
Financial services. The financial sector uses AI to process vast amounts of data to improve almost every aspect of business, including risk assessment, fraud detection and algorithmic trading. The industry also automates and personalizes customer service through the use of chatbots and virtual assistants, including robo-advisors designed to provide investment and portfolio advice.
Manufacturing. Collaborative robots, aka cobots, are working on assembly lines and in warehouses alongside humans, functioning as an extra set of hands. Other AI use cases in manufacturing include using AI to predict maintenance requirements, machine learning algorithms to identify purchasing patterns for predicting product demand, and GenAI for coding programmable logic controllers that automate production processes.
Agriculture. The agriculture industry is using AI-powered sensors, drones and image recognition systems for real-time pest detection and to monitor soil conditions with the aim of producing healthier crops. Agricultural bots equipped with computer vision, machine learning, robotics and other advanced tools can perform a range of farming tasks, from planting seeds and watering crops to precision harvesting.
Law. The document-intensive legal industry is using machine learning to mine data and predict outcomes; computer vision to classify and extract information from documents; chatbots to handle routine client inquiries; and NLP, machine learning and knowledge-based systems for contract review and analysis.
Education. In addition to automating the tedious process of grading exams, AI is being used to assess students and adapt curricula to their needs, paving the way for personalized learning.
IT service management and cybersecurity. IT organizations apply machine learning to ITSM data to gain a better understanding of their infrastructure and processes. They use the named entity recognition component of NLP for text mining, information retrieval and document classification. AI techniques are applied to multiple aspects of cybersecurity, including anomaly detection, solving the false-positive problem and conducting behavioral threat analytics.
Marketing. Marketing departments use a range of AI tools, including chatbots and virtual assistants for customer support, recommendation engines for analyzing customer data and generating personalized suggestions, and sentiment analysis software for brand monitoring.
2024 Nobel Prizes recognize groundbreaking work in AI
The 2024 Nobel Prize in Chemistry was awarded to Google DeepMind researchers Demis Hassabis and John Jumper and University of Washington professor David Baker for using AI to predict the 3D structure of proteins. The Nobel Prize in Physics was awarded to "godfather of AI" Geoffrey Hinton and Princeton's John Hopfield for their work on neural networks.
How have AI use cases evolved?
Many computer-assisted tasks in the enterprise are not automatic but require a certain amount of intelligence. What qualifies as an intelligent machine, however, is a moving target. Use cases once considered to require AI, such as basic data cleansing or simple demand forecasting, quickly became standard data processing practices as advanced computational techniques were rolled into widely available tools. More recently, the previously complex AI problem known as lemmatization -- reducing a word to its root form to improve accuracy in NLP -- has now become a standard feature of NLP pipelines. Here are three areas where rapidly evolving AI techniques, alone and in combination, are creating new enterprise AI use cases in real time:
- Rapid advances in large language models (LLMs) such as OpenAI's o1, which have billions of parameters, mark a new era in business applications in which generative AI models can write engaging text, generate photorealistic images and conjure up movie scripts on the fly. Use cases for GenAI technology include writing email responses, designing physical products and buildings, and optimizing new chip designs.
- Agentic AI systems are capable of autonomous action and decision-making. They are designed to pursue goals independently, without direct human intervention, using advanced techniques such as reinforcement learning and evolutionary algorithms. Today, AI systems that exhibit agentic behaviors are typically specialized for particular tasks and limited in scope for safety and usability reasons. For example, Walmart is investing in agentic AI to improve and automate its supply chain management.
- Embodied AI refers to AI systems that can interact with and learn from their physical environments using technologies that include sensors, motors, machine learning and NLP. This type of intelligence is more akin to a reflex than a concept: It learns to match its output to its sensory inputs. Prominent examples of embodied AI are autonomous vehicles, humanoid robots and drones.
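The lemmatization task mentioned above shows how quickly such capabilities commoditize: what once required research-grade NLP can now be approximated in a few lines. The toy lemmatizer below uses a hand-picked irregular-form table and suffix rules invented for this sketch; production pipelines such as spaCy or NLTK use full vocabularies and part-of-speech context instead.

```python
# Toy lemmatizer: reduce a word to its root form using a tiny lookup
# table for irregular forms plus naive suffix-stripping rules. The
# tables below are a hand-picked subset for illustration only.

IRREGULAR = {"ran": "run", "better": "good", "mice": "mouse"}
SUFFIX_RULES = [("ies", "y"), ("ing", ""), ("ed", ""), ("s", "")]

def lemmatize(word):
    word = word.lower()
    if word in IRREGULAR:              # irregular forms first
        return IRREGULAR[word]
    for suffix, replacement in SUFFIX_RULES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            stem = word[: -len(suffix)] + replacement
            if len(stem) > 2 and stem[-1] == stem[-2]:
                stem = stem[:-1]       # running -> runn -> run
            return stem
    return word                        # already a root form

print([lemmatize(w) for w in ["Running", "companies", "mice", "walked"]])
# → ['run', 'company', 'mouse', 'walk']
```

Real lemmatizers also disambiguate by part of speech ("saw" as a verb vs. a noun), which is exactly the kind of refinement that has migrated from research into standard NLP tooling.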
Narrow AI vs. general AI vs. artificial superintelligence
Enterprise leaders responsible for setting AI strategy should be familiar with three AI concepts: narrow AI, also known as weak AI; artificial general intelligence (AGI); and artificial superintelligence (ASI).
Narrow AI accounts for most of the AI applications today. It refers to models trained to perform specific tasks, such as language translation, spam filtering and image recognition. Intelligent agents like Apple's Siri and Amazon's Alexa, the recommendation engines used by Netflix and Amazon, Tesla's Autopilot and Google Photos are examples of narrow AI.
Artificial general intelligence is generally defined as AI capable of performing any intellectual task a human being can do, including the ability to reason about and think up complex problems it was not programmed to solve. This capacity to think like humans does not exist today. How close we are to achieving AGI is a matter of heated debate.
Artificial superintelligence refers to AI that possesses intellectual powers exceeding those of humans across a wide range of categories and endeavors. It also does not exist. AI programs like the chess engine Stockfish that are superior to humans in a single domain fall well short of ASI. The singularity, an idea popularized by futurist Ray Kurzweil, refers to a hypothetical future in which AI acquires a superhuman level of intelligence that is out of control and irreversible.
Steps for implementing artificial intelligence in the enterprise
As with any emerging technology, the rules of AI implementation are still being written. Some general guidelines from industry leaders in AI to bear in mind: An experimental mindset will yield better results than a "big bang" approach. Start with a hypothesis, followed by testing and rigorous measurement of results -- and iterate. Also, build data fluency. Understanding how data powers your business processes is essential and, according to experts, more challenging than deploying the technology.
To identify AI opportunities, research how other companies in and outside your industry are using AI. Evaluate your internal capabilities and provide AI training and support to employees. Select vendors and partners based on not only their financial stability, technical capabilities and scalability but also on their compatibility with your systems. Continuously improve AI models and processes.
The chart "Key steps for successful AI implementation" lists 13 specific steps to follow, each of which is explained in this blueprint for successful AI implementation.
Managing AI projects
Due to their complexity, data-centricity, iterative nature and potential impact, managing AI projects is different from managing other types of IT initiatives. Potential problems include inflated and unrealistic expectations, the lack of quality data, the inability to implement at scale and tepid user adoption. The chart "12-step program for successfully managing AI projects" lists best practices for undertaking such tasks.
Enterprise AI vendors and tools market
The enterprise AI vendor and tool ecosystem addresses multiple AI-related capabilities. The following summary is based on extensive industry research into the main enterprise AI tool categories and factors in rankings from consultancies Gartner and Forrester.
The large cloud AI platforms from AWS, Google, IBM and Microsoft each come with tools for developing and deploying AI apps. These are a good fit if your enterprise already has a large cloud presence on one platform. Products from the big cloud providers generally fall into two tiers:
- Data science and machine learning (DSML) platforms targeted at data scientists.
- Low-code modules more suited for a wider range of users.
For example, Microsoft Azure AI Studio provides comprehensive tooling, while the vendor's Azure AI Services provides prebuilt AI modules and Azure Machine Learning can be used to build machine learning models.
AWS similarly has Amazon SageMaker, a managed machine learning service in its public cloud that developers can use to build a production-ready AI pipeline, plus a set of AI services advertised as not requiring machine learning experience.
IBM's large portfolio of artificial intelligence products and services is mainly built on Granite foundation models and Watsonx technology and supports both DSML platforms and prebuilt modules.
Google brands all its AI offerings for developers and business users under Google AI. Its Google AI Studio product for building generative AI prototypes does not require machine learning expertise.
Several of the available DSML platforms from other vendors provide a comprehensive set of tools for creating, deploying and managing AI models. MLOps platforms, by contrast, are more narrowly focused on streamlining the process of putting AI models into production and then maintaining and monitoring them over time.
Listed alphabetically, here are some of the top DSML platform vendors: Altair, which purchased RapidMiner in 2022; C3 AI; Databricks; Dataiku; DataRobot; H2O.ai.
Top MLOps vendors include Domino Data Lab, Iguazio and Intel, which offers an Intel Tiber AI Studio tool previously known as Cnvrg.io. Kubeflow, an open source technology for managing machine learning workflows on Kubernetes, is also available.
Although many platforms specialize in one kind of capability, most of the larger players are branching out to support the entire spectrum of AI development, deployment, monitoring and AI-as-a-service capabilities.
What is an AI PC?
An AI PC is a personal computer equipped with hardware and software designed to run AI applications and tasks locally, without the need for cloud services. A core focus is supporting generative AI models and services. AI PCs typically include a CPU, a GPU and a neural processing unit -- a dedicated hardware component for AI acceleration. Microsoft, Dell, HP and Asus, among others, make AI PCs.
AI trends
It is hard to overstate the scope of development being done on artificial intelligence by vendors, governments and research institutions -- and how quickly the field is changing. The rapid evolution of algorithms accounts for many recent advancements, notably the new -- and disruptive -- AI large language models that are redefining the modern search engine.
Equally impressive and worthy of enterprise attention is the spate of new tools designed to automate the development and deployment of AI. Moreover, AI's push into new domains, such as conceptual design, small devices and multimodal applications, will expand AI's repertoire and usher in game-changing abilities for many more industries.
To take full advantage of these trends, IT and business leaders must develop a strategy for aligning AI with employee interests and with business goals. Streamlining and democratizing access to AI, while challenging, is also essential.
The following is a rundown of some current AI developments, which are described in more depth in this article on top AI and machine learning trends:
- AutoML. Automated machine learning is getting better at labeling data and automatically tuning neural network architectures. By automating the work of selecting and tuning models, AutoML will make AI cheaper and shorten the time it takes new models to reach market.
- AI-enabled conceptual design. AI is being trained to play a role in fashion, architecture, design and other creative fields. AI models such as Dall-E, for example, are able to generate conceptual designs of something entirely new from a text description.
- Multimodal learning. AI is getting better at supporting multiple modalities, such as text, vision, speech and IoT sensor data, in a single machine learning model.
- Retrieval-augmented generation. RAG has emerged as a technique for reducing AI hallucinations by combining text generation with access to external information to provide context and improve accuracy.
- Customized enterprise generative AI models. While massive general-purpose GenAI tools such as ChatGPT and Midjourney resonate with consumers, businesses with specialized terminology -- such as legal, healthcare and finance companies -- are exploring smaller, narrow-purpose models built to handle their use cases.
- Computer vision. Less expensive cameras and new AI are creating opportunities to automate processes that previously required humans to inspect and interpret objects. Although challenges abound, computer vision implementations will be an ever more important trend in the near future.
- Increased attention to AI ethics. AI's capacity to generate and amplify disinformation via deepfakes and other sophisticated AI-generated content has put a spotlight on the importance of AI ethics and the need for AI transparency and accountability. Conversely, ethicists have raised questions about the danger of using AI to surveil, suppress and censor information.
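The retrieval-augmented generation pattern listed above can be sketched minimally: retrieve the most relevant document for a question, then prepend it to the prompt so the model answers from supplied context rather than from memory alone. In the sketch below, the documents are invented, a toy keyword-overlap score stands in for a real vector-similarity search, and the final LLM call is omitted.

```python
# Minimal RAG sketch: retrieve the document most relevant to a question
# (here via naive keyword overlap instead of vector search), then build
# a context-grounded prompt. The documents are invented examples.
import re

documents = [
    "Our refund window is 30 days from the date of purchase.",
    "Support chat hours: 9-5 EST, Monday through Friday.",
    "Enterprise plans include a dedicated account manager.",
]

def tokens(text):
    # Lowercased word set, punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question):
    # Stand-in for vector-similarity search: pick the document sharing
    # the most words with the question.
    q = tokens(question)
    return max(documents, key=lambda d: len(q & tokens(d)))

def build_prompt(question):
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How many days do I have to request a refund?")
print(prompt)  # the prompt now grounds the model in the refund policy
```

Because the model is instructed to answer from retrieved text, it has far less room to invent an answer, which is why RAG reduces hallucinations.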
The importance and challenge of responsible AI usage
As AI becomes incorporated into enterprise systems and more widely adopted by the public, the need for better understanding of AI risks has increased. This has led to calls for responsible AI and trustworthy AI. For example, NIST defines AI systems that are trustworthy as having the following characteristics: "valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed."
However, the rapid adoption of AI tools and the technology's ongoing evolution make it difficult to pin down exactly what responsible AI usage means.
Governments, educational institutions and businesses worldwide are racing to set guidelines for responsible use. It remains to be seen whether these regulations will be able to guard against the potential ill effects of AI -- a list that includes job loss, bias and discrimination, misinformation, theft of intellectual property and enhanced cyberattacks. Moreover, there is a risk that such regulation could stifle innovation and damage the financial advantages that AI potentially offers.
The future of artificial intelligence
One of the characteristics that has set us humans apart over our several-hundred-thousand-year history on Earth is a unique reliance on tools and a determination to improve upon the tools we invent. Once we figured out how to make AI work, it was inevitable that AI tools would become increasingly intelligent. What is the future of AI? It will be intertwined with the future of everything we do. Indeed, it will not be long before AI's novelty in the realm of work will be no greater than that of a hammer or plow.
However, AI-infused tools are qualitatively separate from all the tools of the past -- which include beasts of burden as well as machines. We can talk to them, and they talk back. Because they understand us, they have rapidly invaded our personal space, answering our questions, solving our problems and, of course, doing increasingly more of our work.
This synergy is not likely to stop anytime soon. Arguably, the very distinction between human intelligence and artificial intelligence will eventually evaporate. This blurring is occurring because of other technology trends, which incidentally have been spurred by AI. These include brain-machine interfaces that skip the requirement for verbal communication altogether, robotics that give machines all the capabilities of human action, and a deeper understanding of the physical basis of human intelligence thanks to new approaches to unraveling the wiring diagrams of actual brains.
Ultimately, our future is one in which the enhancement of intelligence might be bidirectional, making both our machines and us more intelligent. That is, unless machines reach a superhuman level of intelligence and humanity becomes just another interesting experiment in the evolution of intelligence.
Origins of artificial intelligence and major milestones
The modern field of AI is often dated to 1956, when the term artificial intelligence was included in the proposal for an academic conference held at Dartmouth College that year. But the idea that the human brain can be mechanized is deeply rooted in human history.
Myths and legends, for example, are replete with statues that come to life. Many ancient cultures built humanlike automata that were believed to possess reason and emotion. By the first millennium B.C., philosophers in various parts of the world were developing methods for formal reasoning -- an effort built upon over the next 2,000-plus years by contributors including theologians, mathematicians, engineers, economists, psychologists, computational scientists and neurobiologists.
Below are some milestones in the history of AI and our ongoing, elusive quest to recreate the human brain:
- Rise of the modern computer. The prototype for the modern computer is traced to 1836 when Charles Babbage and Augusta Ada Byron, Countess of Lovelace, invented the first design for a programmable machine. A century later, in the 1940s, Princeton mathematician John von Neumann conceived the architecture for the stored-program computer: This was the idea that a computer's program and the data it processes can be kept in the computer's memory.
- Birth of the neural network. The first mathematical model of a neural network, arguably the basis for today's biggest advances in AI, was published in 1943 by the computational neuroscientists Warren McCulloch and Walter Pitts in their landmark paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity."
- Turing Test. In his 1950 paper, "Computing Machinery and Intelligence," British mathematician Alan Turing explored whether machines can exhibit humanlike intelligence. The Turing Test, named after an experiment proposed in the paper, focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.
- Historic meeting in New Hampshire. The 1956 summer conference at Dartmouth, funded in part by the Rockefeller Foundation, was a gathering of luminaries in this new field. The group included Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were AI notables Allen Newell and Herbert A. Simon, who presented their groundbreaking Logic Theorist -- a computer program capable of proving certain mathematical theorems that was referred to as the first AI program.
- Fruitful days for AI research. The promise of developing a thinking machine on par with human understanding led to nearly 20 years of well-funded basic research that generated significant advances in AI. Examples include Lisp, a language for AI programming that is still used today; Eliza, an early NLP program that laid the foundation for today's chatbots; and the groundbreaking work by Edward Feigenbaum and colleagues on Dendral, the first expert system.
- AI famine. When the development of an AI system on par with human intelligence proved elusive, funders pulled back, resulting in a fallow period for AI research from 1974 to 1980 that is known as the first AI winter. In the 1980s, industry adoption of knowledge-based systems, including expert systems, ushered in a new wave of AI enthusiasm only to be followed by another collapse of funding and support. The second AI winter lasted until the mid-1990s.
- Big data and deep learning techniques spark AI revival. Groundbreaking work by Yann LeCun, Yoshua Bengio and Patrick Haffner, beginning with LeCun's 1989 experiments on handwritten digit recognition, showed that convolutional neural networks (CNNs) could be applied to real-world problems, propelling an AI renaissance that continues to this day. Notable breakthroughs include the 2012 debut of deep CNN architectures based on work by eventual Nobel Prize winner Geoffrey Hinton and colleagues, which triggered an explosion of research on deep learning.
- AI captures public imagination. IBM's Deep Blue captivated the public in 1997 when it defeated world chess champion Garry Kasparov, marking the first time a computer triumphed over a reigning chess champion in a tournament setting. In 2011, an IBM cognitive computing system again took popular culture by storm when supercomputer Watson competed on Jeopardy! and outscored the TV game show's two most successful champions. In 2016, DeepMind's AlphaGo defeated the world's top Go player, drawing comparisons to Deep Blue's victory nearly 20 years earlier.
- Big leaps for machine learning. Ian Goodfellow and colleagues invented a new class of machine learning in 2014 called generative adversarial networks, changing the way images are created. In 2017, Google researchers unveiled the concept of transformers in their seminal paper, "Attention Is All You Need," revolutionizing the field of NLP.
- Rise of humanlike chatbots and dire warnings. In 2018, the research lab OpenAI, co-founded by Elon Musk, released Generative Pre-trained Transformer (GPT), paving the way for the dazzling debut of ChatGPT in November 2022. Four months later, Musk, Apple co-founder Steve Wozniak and thousands more urged a six-month moratorium on training "AI systems more powerful than GPT-4" to provide time to develop "shared safety protocols" for advanced AI systems. In May 2023, Hinton warned that chatbots like ChatGPT -- built on technology he helped create -- pose a risk to humanity.
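The McCulloch-Pitts neuron mentioned in the milestones above is simple enough to sketch in a few lines. The toy code below is an illustrative reconstruction, not code from the 1943 paper: binary inputs, fixed weights of +1 (excitatory) or -1 (inhibitory), and a threshold the weighted sum must reach for the unit to "fire." The function and gate names are ours, chosen for clarity.

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# The paper's "logical calculus": basic logic gates as single neurons.
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)   # fires only if both inputs fire
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)   # fires if either input fires
NOT = lambda a:    mcp_neuron([a],    [-1],   threshold=0)   # inhibitory weight inverts the input
```

Unlike modern neural networks, these units have no learned weights -- learning rules came later, with Rosenblatt's perceptron and, eventually, backpropagation -- but the core idea of a thresholded weighted sum survives in every deep network today.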
Linda Tucci is an executive industry editor at TechTarget Editorial. A technology writer for 20 years, she focuses on the CIO role, business transformation and AI technologies.