
OpenAI restructure move unsurprising, but raises concerns

The for-profit restructuring will give the vendor and its investors more freedom. However, it could lead enterprises to question whether the vendor is committed to safe AI.

OpenAI's move to restructure into a for-profit organization benefits the generative AI vendor and investors eyeing the company.

But while a nonprofit arm of OpenAI would continue to exist, it will no longer control the for-profit operations of the restructured company. OpenAI was founded in 2015 as a nonprofit and formed a for-profit subsidiary in 2019.

The move to restructure, widely reported this week, also means CEO Sam Altman will receive equity in the for-profit arm, which would be chartered as a benefit corporation. Other AI companies organized as benefit corporations include OpenAI competitor Anthropic and Sama.

A shaky company

The development comes nearly a year after OpenAI's board voted to fire Altman for lack of candor. While the board later rehired Altman, the incident was followed by the departure of many board members.

Co-founder Ilya Sutskever left in May to start an AI safety company called Safe Superintelligence. SSI recently raised $1 billion in venture funding.

Most recently, on Wednesday, CTO Mira Murati said she was leaving the company. Later that day, Altman revealed that chief research officer Bob McGrew and research leader Barret Zoph also quit. Altman has said he plans to reveal a successor to Murati soon.

The move to restructure also comes as a group of high-profile investors is expected to soon invest in OpenAI in a multibillion-dollar funding round. Investors include Nvidia, Apple, the United Arab Emirates AI fund MGX, Thrive Capital and Microsoft, OpenAI's principal financial backer up to now. Currently valued at about $80 billion, OpenAI could reach a valuation of as much as $150 billion with the new funding.

A complicated structure

The restructuring is not surprising because the current setup of OpenAI is so convoluted, said Northeastern University AI adviser Michael Bennett.

"It's complicated in terms of management priorities and ability to move," Bennett said. "Considering the valuation of OpenAI, it is complicated from the perspective of a garden-variety investor."

Investors interested in the AI vendor would have to understand how the nonprofit board controls OpenAI, what it means for how the vendor operates and the regulatory constraints it operates under, Bennett added.


"Given the amount of money that's involved with this type of company, given the incredible interest and only growing interests of would-be investors, it's not surprising that they would make this move to become a benefit corporation," he continued.

Openness and ethical obligations

However, the restructuring comes with questions about the future of OpenAI's technology.

As a vendor at the forefront of the generative AI market since it unveiled ChatGPT, a chatbot built on a large language model (LLM), in November 2022, OpenAI has received criticism about the ways it has released its generative AI technology, even with a nonprofit board.

Most notably, X and xAI owner Elon Musk has attacked the AI vendor for going back on its promise of being open and for not releasing information about the data, weights and source code it uses to train its models.

Musk's criticism became more formal when, as a co-founder of OpenAI, he filed a lawsuit against the company alleging breach of contract.

OpenAI, along with other generative AI vendors, has been faulted for not doing enough to control or reduce harmful LLM hallucinations and biased model outputs and training data, and for not guarding against the possibility that generative AI could harm humanity.

While it's hard to predict whether OpenAI will become less open or perhaps reckless with how it develops and releases its proprietary generative AI technology, it's clear that the vendor will become freer to operate as it wishes, Bennett said.

"They will be able to probably more quickly and almost certainly be able to move in ways that are more explicitly consistent with what benefits their shareholders and equity holders," he said.

Moreover, OpenAI as a for-profit entity would also be free from the moral bounds it previously operated under, which committed it to doing good when releasing AI technology, Bennett added.

"To move away from that nonprofit status would free them up to shrink those ethics commitments and those ethics requirements," he said.

The role of governance

However, OpenAI's structure should not define whether the vendor creates safe and ethically sound AI technology, said Veera Siivonen, co-founder and chief commercial officer of Saidot, an AI governance vendor.

"The role of regulation should make sure that they have boundaries to what they're doing," Siivonen said.

Other major players in the generative AI market, such as Anthropic and Cohere, are for-profit, she added, as, of course, are tech giants Google, Meta and AWS, which all develop and sell the technology.

"It would be a bit unfair to ask OpenAI to stick to a model where they are not allowed to make a profit," she said. "That means that they don't have the ability to attract the investments, either."

Regulation is one way to make sure vendors such as OpenAI are committing to creating safe systems, and OpenAI has said it would do that, Siivonen noted.

On Wednesday, OpenAI was among 100 vendors that signed the European Union AI Pact, a voluntary commitment to apply the principles of the EU's AI Act.

OpenAI is also motivated by its customers, Siivonen said.

"If they don't have safety measures, they will not be able to earn in the field in the long term," she said. "If they lose the trust of businesses, then they won't have the money from the businesses, and they won't have the investors behind them in the long term."

"That's probably even a stronger incentive than just some not very clear corporate structure that they've been having," she continued.

Dilemma for enterprises

While keeping the trust of its customers is a motivator, enterprises might still be cautious about OpenAI's long-term stability and commitment to ethical AI, said Futurum Group analyst Dion Hinchcliffe.

With fewer checks from the nonprofit side, OpenAI may accelerate product releases and focus more on competing in the commercial AI space. This could mean faster rollouts but also raise significant ethical concerns, Hinchcliffe continued.

"Faster innovation cycles like this might appeal to companies focused on rapid AI adoption from the top AI vendor," he said. "Customer loss would depend on how trust is managed as the transition unfolds."

Moreover, the recent exit of many staff members also raises concerns about OpenAI's commitment to safety and governance, he added.

"If that's part of the reason they left, which seems possible with what we know, it could signal a pivot away from prioritizing safety-first approaches, making large customers reassess trust in OpenAI as a responsible vendor," Hinchcliffe said.

Meanwhile, OpenAI continues to release new products.

On Thursday, the AI vendor introduced a new moderation model, omni-moderation-latest.

The new model is built on GPT-4o and supports text and image inputs. Developers can use it to check whether text or images are potentially harmful.

It is free for developers through the Moderation API.
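For developers, a minimal sketch of how the new model might be used is below. The live request shown in the comment assumes the official OpenAI Python SDK; the canned response here is illustrative, with field names following OpenAI's documented Moderation API schema, so the example runs without an API key.

```python
import json

# A live call with the OpenAI Python SDK would look roughly like:
#   from openai import OpenAI
#   client = OpenAI()  # requires OPENAI_API_KEY in the environment
#   resp = client.moderations.create(
#       model="omni-moderation-latest",
#       input="...text to check...",
#   )

# Illustrative response shaped like the Moderation API's output
# (a subset of categories; values are made up for this sketch).
sample_response = json.loads("""
{
  "id": "modr-example",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": true,
      "categories": {"violence": true, "harassment": false},
      "category_scores": {"violence": 0.91, "harassment": 0.02}
    }
  ]
}
""")

def flagged_categories(response: dict) -> list[str]:
    """Return the names of the categories the moderation model flagged."""
    result = response["results"][0]
    if not result["flagged"]:
        return []
    return [name for name, hit in result["categories"].items() if hit]

print(flagged_categories(sample_response))  # ['violence']
```

An application would typically run this check on user-supplied content before passing it to a generative model or publishing it, and block or escalate anything that comes back flagged.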

Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
