
GPT-4o explained: Everything you need to know

OpenAI's GPT-4o is a multimodal large language model that supports real-time conversations, Q&A, text generation and more. The vendor also offers GPT-4o mini.

The foundation of OpenAI's success and popularity is the company's GPT family of large language models (LLMs), including GPT-3 and GPT-4, alongside the company's ChatGPT conversational AI service.

OpenAI announced GPT-4 Omni (GPT-4o) as the company's new flagship multimodal language model on May 13, 2024, during the company's Spring Updates event. As part of the event, OpenAI released multiple videos demonstrating the intuitive voice response and output capabilities of the model.

In July 2024, OpenAI launched GPT-4o mini, its most advanced small model.

What is GPT-4o?

GPT-4o is the flagship model of the OpenAI LLM technology portfolio. The o stands for "omni" -- not marketing hyperbole, but a reference to the model's multiple modalities for text, vision and audio.

The GPT-4o model marks the next evolution of the GPT-4 LLM that OpenAI first released in March 2023. This isn't the first update for GPT-4 either, as the model got a boost in November 2023 with the debut of GPT-4 Turbo. The GPT acronym stands for Generative Pre-trained Transformer. A transformer model is a foundational element of generative AI, providing a neural network architecture that can understand and generate new outputs.

GPT-4o goes beyond GPT-4 Turbo in terms of both capabilities and performance. As was the case with its GPT-4 predecessors, GPT-4o can be used for text generation use cases, such as summarization and knowledge-based Q&A. The model is also capable of reasoning, solving complex math problems and coding.

The GPT-4o model introduces a new rapid audio input response that -- according to OpenAI -- is like that of a human, with an average response time of 320 milliseconds. The model can also respond with an AI-generated voice that sounds human.

Rather than having multiple separate models that understand audio, images -- which OpenAI refers to as vision -- and text, GPT-4o combines those modalities into a single model. As such, GPT-4o can understand any combination of text, image and audio input and respond with outputs in any of those forms.

The promise of GPT-4o and its high-speed audio multimodal responsiveness is that it enables the model to engage in more natural and intuitive interactions with users.

OpenAI has made a series of incremental updates to GPT-4o since its initial release in May 2024. In August 2024, support was added for Structured Outputs, which constrain the model's responses to a developer-supplied JSON schema. The most recent GPT-4o update came on November 20, 2024, raising the maximum token output to 16,384, up from 4,096 when the model was first released in May 2024.
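For developers, the sketch below shows how Structured Outputs can be requested through OpenAI's Python SDK, assuming an API key is set in the environment; the schema name and fields are hypothetical examples, not anything prescribed by OpenAI.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical JSON schema the model's output must conform to.
schema = {
    "name": "meeting_summary",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "action_items": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "action_items"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize these meeting notes: ..."}],
    response_format={"type": "json_schema", "json_schema": schema},
)

print(response.choices[0].message.content)  # JSON string matching the schema
```

Because the response is guaranteed to match the schema, downstream code can parse it directly instead of validating free-form text.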

What is GPT-4o mini?

As is the case for the full version, OpenAI's GPT-4o mini has a 128K context window with a maximum token output of 16,384 tokens. Training data for GPT-4o mini also extends through October 2023. What differentiates GPT-4o mini from the full model is its size, which lets it run faster and at lower cost. OpenAI does not publicly disclose the parameter counts of its models.

According to OpenAI, GPT-4o mini is smarter and 60% cheaper than GPT-3.5 Turbo, which had previously been the vendor's smaller and faster model variant.

In terms of textual intelligence, GPT-4o mini outperformed GPT-3.5 Turbo on the Measuring Massive Multitask Language Understanding (MMLU) benchmark with a score of 82% vs. 69.8%.

For developers, GPT-4o mini is an attractive option for use cases that don't require the full model, which is more expensive to operate. The mini model is well suited for use cases where there is a high volume of API calls, such as customer support applications, receipt processing and email responses.

GPT-4o mini is available in text and vision models for developers with an OpenAI account through the Assistants API, Chat Completions API and Batch API. As of July 2024, GPT-4o mini replaced GPT-3.5 Turbo as the base model option in ChatGPT. It is also an option for ChatGPT Plus, Pro, Enterprise and Team users.
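As a minimal sketch of the kind of high-volume, low-cost task GPT-4o mini targets, the following call classifies an incoming support email through the Chat Completions API; the category labels and email text are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example of a high-volume, low-cost task: routing a support email.
email = "My invoice from last month lists a charge I don't recognize."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Classify the email as one of: billing, technical, "
                       "account, other. Reply with the single category word.",
        },
        {"role": "user", "content": email},
    ],
    max_tokens=5,  # the answer is a single word, so cap output tokens
)

print(response.choices[0].message.content)  # e.g., "billing"
```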

What can GPT-4o do?

At the time of its release, GPT-4o was the most capable of all the OpenAI models in terms of both functionality and performance.

The many things GPT-4o can do include the following:

  • Real-time interactions. The GPT-4o model can engage in real-time verbal conversations without noticeable delays.
  • Knowledge-based Q&A. As was the case with all prior GPT-4 models, GPT-4o has been trained with a knowledge base and can respond to questions.
  • Text summarization and generation. As was the case with all prior GPT-4 models, GPT-4o can execute common text LLM tasks, including text summarization and generation.
  • Multimodal reasoning and generation. GPT-4o integrates text, voice and vision into a single model, allowing it to process and respond to a combination of data types. The model can understand audio, images and text at the same speed. It can also generate responses via audio, images and text.
  • Language and audio processing. GPT-4o has advanced capabilities in handling more than 50 different languages.
  • Sentiment analysis. The model understands user sentiment across different modalities of text, audio and video.
  • Voice nuance. GPT-4o can generate speech with emotional nuances. This makes it effective for applications requiring sensitive and nuanced communication.
  • Audio content analysis. The model can generate and understand spoken language, which can be applied in voice-activated systems, audio content analysis and interactive storytelling.
  • Real-time translation. The multimodal capabilities of GPT-4o support real-time translation from one language to another.
  • Image understanding and vision. The model can analyze images and videos, allowing users to upload visual content that GPT-4o will understand, explain and provide analysis for (see the API sketch after this list).
  • Data analysis. The vision and reasoning capabilities let users analyze data contained in data charts. GPT-4o can also create data charts based on analysis or a prompt.
  • Software development. GPT-4o can generate new code for an application, as well as analyze and debug existing code.
  • File uploads. GPT-4o supports file uploads, letting users supply their own data for analysis beyond the model's knowledge cutoff.
  • Memory and contextual awareness. GPT-4o can remember previous interactions and maintain context over longer conversations.
  • Large context window. With a context window supporting up to 128,000 tokens, GPT-4o can maintain coherence over longer conversations or documents, making it suitable for detailed analysis.
  • Reduced hallucination and improved safety. The model is designed to minimize the generation of incorrect or misleading information. Enhanced safety protocols make sure outputs are appropriate and safe for users.
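To illustrate the image understanding capability above, text and an image can be sent together in a single Chat Completions request; this is a minimal sketch using OpenAI's Python SDK, and the image URL is a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One request carries both a text prompt and an image; GPT-4o processes
# both modalities in a single model. The image URL is a placeholder.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What trend does this chart show?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/q3-revenue-chart.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```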

The capabilities provided by GPT-4o support many industry use cases, including the following:

  • Customer support. Organizations can use GPT-4o to build chatbots for real-time interactions.
  • Legal. GPT-4o can help law firms summarize cases, as well as perform legal research and contract reviews.
  • Medical. Health organizations can use GPT-4o for patient record analysis and diagnostic assistance.
  • Education and training. GPT-4o can help educational institutions create interactive tutorials and explain content.

How to use GPT-4o

There are several ways users and organizations can use GPT-4o.

  • ChatGPT Free. The GPT-4o model is available to free users of OpenAI's ChatGPT chatbot. ChatGPT Free users face message limits and don't get access to some advanced features, including vision, file uploads and data analysis.
  • ChatGPT Plus. Users of OpenAI's paid service for ChatGPT get full access to GPT-4o, without the feature restrictions that are in place for free users. As of December 2024, ChatGPT Plus costs $20 a month.
  • ChatGPT Pro. ChatGPT Pro -- the most advanced version of ChatGPT that includes the o1 models -- also provides access to GPT-4o. As of December 2024, ChatGPT Pro costs $200 a month.
  • ChatGPT Team. The group-oriented version of ChatGPT also provides access to GPT-4o. As of December 2024, ChatGPT Team costs $25 per user, per month.
  • API access. Developers can access GPT-4o through OpenAI's API, allowing them to integrate the model's full capabilities into their own applications. API pricing as of December 2024 for GPT-4o is $2.50 per 1M input tokens and $10.00 per 1M output tokens. Pricing for GPT-4o mini is $0.150 per 1M input tokens and $0.600 per 1M output tokens (a cost estimate based on these rates appears after this list).
  • Desktop applications. OpenAI has integrated GPT-4o into desktop applications, including a new app for Apple's macOS that was also launched on May 13.
  • Custom GPTs. Organizations can create custom GPT versions of GPT-4o tailored to specific business needs or departments. Custom models can be offered to users via OpenAI's GPT Store.
  • Microsoft Azure OpenAI Service. Users can explore GPT-4o's capabilities in a preview mode within the Microsoft Azure OpenAI Studio that's designed to handle multimodal inputs, including text and vision. Pricing varies by region. The global price for GPT-4o is $2.50 per 1M input tokens and $10.00 per 1M output tokens, while pricing for GPT-4o mini is $0.150 per 1M input tokens and $0.600 per 1M output tokens.
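To put the API pricing in perspective, here's a short worked estimate of monthly cost at the December 2024 GPT-4o rates quoted above; the request volume and token counts are hypothetical assumptions.

```python
# GPT-4o API rates as of December 2024 (USD per 1M tokens).
INPUT_RATE = 2.50
OUTPUT_RATE = 10.00

# Hypothetical workload: 5,000 requests/day, ~800 input and ~300 output tokens each.
requests_per_day = 5_000
input_tokens = requests_per_day * 800 * 30    # ~120M input tokens per month
output_tokens = requests_per_day * 300 * 30   # ~45M output tokens per month

cost = (input_tokens / 1_000_000) * INPUT_RATE + (output_tokens / 1_000_000) * OUTPUT_RATE
print(f"Estimated monthly cost: ${cost:,.2f}")  # $300 input + $450 output = $750.00
```

The same workload on GPT-4o mini, at roughly 6% of GPT-4o's per-token price, would cost about $45 per month, which is why the mini model is attractive for high-volume API use.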

Limitations of GPT-4o

While GPT-4o provides many capabilities, the model has the following limitations:

  • Context window. GPT-4o's context window limit of 128K is sufficient for many tasks, but not all of them. Google claims its Gemini 1.5 Pro model has a 2 million token context window.
  • Knowledge cutoff. The training data for GPT-4o is limited to data from October 2023 or earlier.
  • Hallucination risk. As with any generative AI model, GPT-4o isn't perfect and does have a risk of generating AI hallucinations.
  • Bias. While OpenAI has tried to limit bias, there is still the potential for the model to provide responses that might not be representative of diverse perspectives.
  • Reasoning. GPT-4o is limited in its ability to reason, especially in comparison to OpenAI's o1 model family, which has been designed specifically to solve that challenge.
  • Security. There is a potential risk that adversarial inputs could trick GPT-4o into producing unexpected or unintended output.

GPT-4 vs. GPT-4 Turbo vs. GPT-4o

Here's a quick look at the differences between GPT-4, GPT-4 Turbo and GPT-4o:

| Feature/Model | GPT-4 | GPT-4 Turbo | GPT-4o |
| --- | --- | --- | --- |
| Release date | March 14, 2023 | November 2023 | May 13, 2024 |
| Context window | 8,192 tokens | 128,000 tokens | 128,000 tokens |
| Knowledge cutoff | September 2021 | December 2023 | October 2023 |
| Input modalities | Text, limited image handling | Text, images (enhanced) | Text, images, audio (full multimodal capabilities) |
| Vision capabilities | Basic | Enhanced, includes image generation via Dall-E 3 | Advanced vision and audio capabilities |
| Multimodal capabilities | Limited | Enhanced image and text processing | Full integration of text, image and audio |

Editor's note: This article was updated in January 2025 to reflect updated product and pricing information and to improve the reader experience.

Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.
