The year in AI: Catch up on the top AI news of 2024

As the new year approaches, the pace of AI development shows no signs of letting up. Get up to speed for 2025 by catching up on TechTarget Editorial's top AI news stories from 2024.

This December marks just over two years since ChatGPT's initial release, a period that's seen exponential AI growth across industries and sectors.

From lawsuits and regulatory milestones to a constant stream of product launches and infrastructure advancements, 2024 has been another pivotal year in artificial intelligence. Take a deep dive into the standout moments of 2024 -- the stories that defined the year and will shape AI's trajectory in 2025.

AI-powered competitors challenge Google's dominance in search

Google has long been the leading search engine, but the rise of search tools with AI capabilities spurred competition in 2024.

In January, AI search tool Perplexity AI raised $73.6 million in Series B funding. The tool has gained popularity among users for its natural language processing capabilities and ability to cite sources.

Established players are getting in on AI search too. In July, OpenAI introduced an early AI search engine prototype named SearchGPT, which incorporated ChatGPT's conversational features and was developed in partnership with publishers like News Corp and Condé Nast. November saw the public release of the search tool, rebranded as ChatGPT Search.

Meta has also signaled intentions for AI search, announcing in October plans to create a search engine with AI-generated summaries. And Google itself has added generative AI features to its flagship search engine this year, including AI overviews, multimodal querying and search results organized by AI.

European Parliament passes the EU AI Act

After years of drafting and deliberation, the European Parliament approved the EU AI Act in March, establishing a regulatory framework for AI systems. The act, which emphasizes responsible AI use, data privacy and ethical standards, currently applies to all AI users in the EU and any AI providers conducting business within the region.

The legislation came into force in August 2024, with a tiered implementation timeline ranging from six months to over a year. Organizations affected by the act or aiming to prepare for future regulations are taking steps to become compliant.

Nvidia dominates AI chip market as rival manufacturers seek footholds

At Nvidia's GTC AI developer conference in March, the chip vendor unveiled designs for its new AI chip, Blackwell, promising increased compute power, support for larger models and enhanced inference capabilities, all with a smaller hardware footprint. Despite setbacks and rumors of overheating earlier in the year, CEO Jensen Huang confirmed in November that production is moving "full steam" ahead.

Nvidia remains the leading player in the AI chip market, with major generative AI developers like Google and Microsoft relying on its hardware. But competitors are beginning to gain ground.

At Computex 2024 in June, chipmakers AMD, Intel and Qualcomm showcased their recent advancements in the AI chip race alongside Nvidia. Intel, in particular, spent 2024 attempting to position itself as a serious alternative to Nvidia. In April, it unveiled the Gaudi 3 GPU, designed to rival Nvidia's H100 GPU for AI workloads. But technologists say Gaudi 3's generative AI support falls short of Nvidia's offerings, and leadership turmoil at the company could further erode buyer confidence.

Publishers and artists push back on generative AI training practices

In December 2023, The New York Times sued OpenAI and its investor Microsoft, accusing them of copyright infringement. The lawsuit alleged that the companies used millions of Times articles as training data for their generative AI chatbots. According to the complaint, many of these articles were originally paywalled, but their contents became accessible via the chatbot in response to specific prompts.

The following month, OpenAI responded with a fair use defense, arguing that the use of publicly available Times materials did not constitute copyright infringement and that chatbot-generated outputs were distinct from the original articles. The lawsuit, which remains unresolved as 2024 draws to a close, spotlights the friction between copyright laws and the growing demand for training data to develop generative AI models.

Other copyright disputes have also made headlines throughout the year. In April, musicians issued an open letter to AI vendors, accusing them of infringing on intellectual property by training models on musicians' work without permission. In October, the Times sent a cease-and-desist letter to Perplexity AI, alleging unauthorized use of the Times' content.

Vendors release agentic AI products to meet growing interest

At the annual AI Summit New York conference this December, AI agents were top of mind for session speakers and attendees alike. But excitement about AI agents -- capable of autonomously navigating environments to pursue specific goals without direct human oversight -- remains tempered by caution. Attendees highlighted challenges such as defining clear task specifications and mitigating the risk of hallucinations.

As agentic AI gains momentum, vendors are taking note. In an August interview with TechTarget, Jason Gelman, Google Cloud's director of product management for Vertex AI, characterized agents as an early-stage technology with promising use cases.

Some vendors are already packaging AI agents for customers. In August, Salesforce unveiled Agentforce, a platform for building custom AI agents. In September, at Oracle CloudWorld, Oracle launched over 50 AI agents for automating business processes. Most recently, in December, Google released its AI agent tool Google Agentspace just two days after Gemini 2.0, its new model for training agentic workflows.

Growing calls for AI safety spur vendor action, but concerns remain

In March, AI safety whistleblower Shane Jones, a Microsoft AI engineer, raised serious concerns over the safety of the company's Copilot Designer product. In letters to Federal Trade Commission Chair Lina Khan and Microsoft's board of directors, Jones alleged that the model produced images in violation of Microsoft's responsible AI principles.

Amid growing public concern about the safety of AI models, vendors took some steps to address issues. In March, OpenAI, Google and other AI vendors signed an open letter pledging to prioritize responsible AI development. But critics pointed out that the initiative lacked concrete action.

Some companies took more tangible measures later in the year. In September, Microsoft announced new safety tools for its generative AI products. In October, Adobe followed suit with a generative AI video model trained exclusively on licensed, commercially friendly content.

OpenAI expands its portfolio with new offerings

Already a leading AI vendor, OpenAI continued to expand its commercial footprint in 2024, with several notable additions to the product lineup:

  • Sora. In February, OpenAI introduced Sora, a multimodal model capable of producing complex video scenes from text prompts. After red-teaming the model throughout the year, the vendor released Sora Turbo, a faster version that produces short clips, to the general public in December.
  • GPT-4o. In May, OpenAI released GPT-4o, the newest iteration of its GPT large language model (LLM). Designed for multimodal applications, it offers improved speed and lower costs. In July -- following the release of small language models from Microsoft, Google and Mistral -- OpenAI debuted GPT-4o mini, a cheaper and faster option.
  • o1. In September, OpenAI introduced the o1 series of reasoning models, including a lightweight o1-mini version. These models are designed for decision-making, logic and strategizing tasks -- capabilities essential to building autonomous AI agents.

OpenAI also made a significant organizational change in September, announcing plans to restructure as a for-profit entity. While the move offers flexibility for OpenAI and its investors, it has raised concerns about OpenAI's continued commitment to AI safety.

AI-powered disinformation raises security concerns

The 2024 U.S. presidential election season was marred by widespread AI-generated disinformation. In January, during New Hampshire's presidential primary, robocalls using a deepfake of President Joe Biden's voice urged residents to stay home and not vote. In August, another deepfake audio clip -- this time of Vice President Kamala Harris -- circulated on X, formerly Twitter, gaining further attention when it was reposted by X owner Elon Musk.

In response to events like these, legislators attempted to address concerns about AI-generated disinformation. In April, a Senate subcommittee convened to discuss the risks of deepfake AI and its potential ramifications for elections.

Meanwhile, some technology vendors developed products intended to curb the spread of deepfake content. In May, OpenAI introduced a deepfake detector, although its capabilities were limited to recognizing content generated by OpenAI models and its rollout was limited to a small group of disinformation researchers. In August, information security vendor Pindrop released a tool capable of identifying AI-generated speech in audio files.

Google releases Gemini and Gemma as generative AI race intensifies

In February, Google rebranded its AI chatbot Bard as Gemini and launched Gemini Advanced, a multimodal AI assistant powered by a new, larger foundation model.

Shortly thereafter, Google released a private preview of its newest LLM, Gemini 1.5 Pro, featuring a million-token context window -- the largest offered by any generative AI provider at the time. Tapping into the growing open source market, the tech giant also introduced its Gemma series of models later that month.

These moves early in the year helped Google stay relevant in an ever-growing generative AI landscape. In the months that followed, other vendors debuted many proprietary and open source generative AI offerings, including SambaNova's Samba-1, Anthropic's Claude 3 and 3.5 Sonnet, Meta's Llama 3.1, and IBM's Granite 3.0.

Olivia Wisbey is the associate site editor for TechTarget Enterprise AI. She graduated with bachelor's degrees in English literature and political science from Colgate University, where she served as a peer writing consultant at the university's Writing and Speaking Center.