
OpenAI releases new models that provide better reasoning

While better reasoning is a necessary step toward workflows built on AI agents, it does not solve the problem of bias in large language models. Good data is one way to mitigate that problem.

OpenAI on Thursday released new reasoning models in preview.

OpenAI o1 is a series of AI models designed to spend more time thinking or processing information before responding, according to the ChatGPT creator. The models have learned to refine their thinking process and try different strategies during training, OpenAI said.

The company also introduced OpenAI o1-mini, a cheaper reasoning model that is effective at coding.

The new models have so far received a mixed reception.

A step forward

Better-reasoning AI models are a step toward agentic models and AI agents -- AI systems that perform tasks with little or no human intervention. Many vendors, notably Google, are also working toward creating better-reasoning models.

"It is definitely a step forward, showing a clear evolutionary pattern for the generative AI models," Gartner analyst Arun Chandrasekaran said. "Agents require many different capabilities, one of which is clearly reasoning. This is definitely one of the key pillars toward enabling more autonomous agents in the future."

In addition to reasoning, AI agents also need memory and orchestration, he continued.

While OpenAI appears to be introducing reasoning into the models with reinforcement learning techniques, it has yet to reveal deep technical details about them. Reinforcement learning trains large and small language models to produce better results by rewarding good outputs and penalizing bad ones.
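OpenAI has not published its training method, but the general idea behind reinforcement learning can be illustrated with a toy sketch. The snippet below is purely conceptual: the model, reward function and update rule are hypothetical stand-ins, and a real system would adjust model weights rather than a two-entry probability table.

import random

def reward(answer: str, expected: str) -> float:
    # Reward a correct answer, penalize an incorrect one.
    return 1.0 if answer.strip() == expected.strip() else -1.0

# Toy "policy": a weighted choice between two candidate strategies.
weights = {"strategy_a": 0.5, "strategy_b": 0.5}

def generate(prompt: str, strategy: str) -> str:
    # Stand-in for a language model call; here, strategy_b happens to give the right answer.
    return "4" if strategy == "strategy_b" else "5"

LEARNING_RATE = 0.1

for step in range(20):
    strategy = random.choices(list(weights), weights=list(weights.values()))[0]
    answer = generate("What is 2 + 2?", strategy)
    r = reward(answer, "4")
    # Nudge the probability of the chosen strategy up or down based on the reward.
    weights[strategy] = max(0.01, weights[strategy] + LEARNING_RATE * r)
    total = sum(weights.values())
    weights = {k: v / total for k, v in weights.items()}

print(weights)  # strategy_b should end up favored

Over repeated trials, the strategy that earns rewards is chosen more often, which is the same feedback loop, at vastly larger scale, that OpenAI describes using to refine the models' thinking.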

While the new OpenAI models are good at reasoning, which is particularly useful for math and coding, it's not clear how enterprises would use the models.

Better reasoning sounds good, but for enterprises, having to navigate the slower response time and the higher cost of the o1 and o1-mini models compared with faster LLMs might be challenging, Chandrasekaran said.

OpenAI has yet to reveal cost details for the models.

Solving for bias

While better reasoning for AI models is desirable, it does not eliminate the problem of bias.

Models that use chain-of-thought prompting, in which a model is asked to reason step by step before giving a final answer, often base that reasoning on biased data.
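For readers unfamiliar with the technique, the sketch below shows what explicit chain-of-thought prompting looks like with the OpenAI Python SDK. The model name and math problem are illustrative placeholders, and o1 models perform this kind of step-by-step reasoning internally rather than via an instruction; either way, any bias in the underlying training data flows through the same reasoning steps.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model you have access to
    messages=[
        {
            "role": "user",
            "content": (
                "A train travels 120 miles in 2 hours, then 180 miles in 3 hours. "
                "Think through the problem step by step, showing your reasoning, "
                "and then state the average speed for the whole trip."
            ),
        }
    ],
)

print(response.choices[0].message.content)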

Because of this, what OpenAI and other AI creators might need is not specialized models that can reason, but rather better data that addresses model accuracy and hallucination, said Mark Beccue, an analyst at TechTarget's Enterprise Strategy Group.


"The better the data is, the better the models can answer questions," he said.

It's better to train models on better data and name the source of that data, according to Beccue. "It's transparency," he said.

Naming the source makes it clearer how the model arrives at its reasoning.

"What's the point of having reasoning, better results?" Beccue continued. "If you're reasoning on faulty data, it's still bad. Let's start with the foundation, which is good data."

The need for both good data and better reasoning reflects the balance the generative AI market must strike between innovation and ethics, said Alon Yamin, co-founder and CEO of Copyleaks, a vendor of AI-based text analysis software.

"You want to make sure that on the one hand, we keep having new, better models," Yamin said, adding that reasoning is an example of such innovation. "From the ethics side, [it's] making sure you understand what's the information that you're training on -- what the limitations are, and identifying and mitigating potential risks to come with the models."

ChatGPT Plus and Team users can now access the o1 models in ChatGPT and through the OpenAI API. ChatGPT Enterprise and Edu users will get access to the models next week, and some developers can start prototyping with them today.
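For developers with API access, calling the preview models looks much like any other chat completion. The sketch below assumes the "o1-preview" model name and omits optional parameters the preview models may not support; because the model reasons internally, the prompt can simply state the problem.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the cheaper, coding-focused variant
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that checks whether a string is a palindrome.",
        }
    ],
)

print(response.choices[0].message.content)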

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
