Possible reasons for Meta disbanding its responsible AI team
After releasing more generative AI tools this year, the tech giant made a controversial move by dismantling its unit dedicated to responsible AI.
As the push for government regulation of AI and AI safety gains momentum, Meta has moved in the opposite direction on responsible AI.
The Facebook and Instagram parent company late last week restructured what remains of its responsible AI team, some of whose members were laid off earlier this year, The Information first reported.
Former team members will now be split up, with some moving to the company's generative AI product team while others will work on Meta's AI infrastructure.
Meta and generative AI
The move comes as the social media giant has pivoted much of its strategy toward generative AI.
On Nov. 16, Meta showcased its advances in AI-powered image and video generation with new tools that give users more control over image editing through text instructions. It also added a new method for text-to-video generation. Both tools are built on Meta's image generation foundation model, Emu.
Meta has also seen success with its open source large language model Llama since its release in February, positioning it as a responsible alternative to LLMs from OpenAI and Google.
Reasons for the move
Given Meta's current influence in the open source market with Llama, it seems counterproductive for the company to eliminate its responsible AI team. However, there may be several reasons the decision makes sense, said Michael Bennett, responsible AI lead at Northeastern University.
"If we put on an internal lens, there may be … different nonexclusive reasons that this might make sense from a managerial perspective," Bennett said.
One reason may be that the responsible AI team needed to contribute more effectively to revenue, making this a simple business decision, he said.
Another may be that the move is a strategic redistribution that will let members of the team continue their responsible AI work while reassigned to other parts of Meta.
"There's definitely some logic to shifting the AI ethics team closer to the engineering teams," said Kashyap Kompella, CEO of analyst firm RPA2AI Research. "Both teams can definitely gain from a better understanding of their respective goals, methods and constraints."
Kompella said he advises his clients to take a hybrid approach with AI ethics. The idea is to involve AI ethics teams with product and engineering teams during the development stage, he said.
However, there is still a need for a separate AI governance layer for risk, legal and compliance oversight, Kompella added.
Another reason Meta may have made its decision is to support its "year of efficiency," said Forrester analyst Nikhil Lai.
"It's streamlining and refocusing on core products," Lai said.
It also shows that Meta, along with other tech giants such as Microsoft and Google, is focusing on more than one form of AI, he added.
"There's predictive AI, generative AI, functional AI and more," he continued. "All of which can cause workflow and analytical efficiencies that, ultimately, boost advertisers' productivity and performance on and off Meta."
But the move could have to do -- perhaps counterintuitively -- with recent developments in AI regulation as Meta looks ahead to larger compliance efforts rather than internal AI governance, Bennett said.
President Joe Biden last month released a sweeping executive order to establish new AI safety and security standards.
Also, European countries are now negotiating rules and regulations that will guide foundation models and other AI technology in the anticipated EU AI Act.
"It seems like we are slowly seeing governments give time for action on their promise to regulate generative AI and other forms of AI," Bennett said.
With these regulations on the horizon, Meta and other technology vendors anticipate the need to comply with new laws and incorporate them into their technology.
"It may be that they're predicting that, say, within another six months to 12 months, there will be more of a need to comply with the law," Bennett added. "That will be a more pressing concern than kind of looking for guidance from internal applied ethics sources in the form of a responsible AI team."
However, despite these steps, tech companies will still find it challenging to comply with regulation initiatives in a way that offers business value, said Deep Analysis analyst Alan Pelz-Sharpe.
"We have to be realistic and accept that without strong policing of AI, the rush to profit will always win out," he said. "This is not to say that tech companies want to deliver harmful AI, but the cost and time to ensure they are truly safe does not make business sense."
The systems are so complex that only "enforced, constantly evolving and monitored policies can ensure they deliver responsible and ethical [tools]," Pelz-Sharpe continued.
A possible AI catastrophe
With this decision, Meta could also face the risk of a major AI problem, especially as the U.S. approaches an election year amid fears of large-scale social media manipulation using advanced AI.
However, there is hope that even if Meta faces a catastrophe during a period of limited regulation, the fallout would lead to even better policies, Bennett said.
"If something bad happens in that period, we have to hope it wouldn't be a wasted catastrophe," Bennett said. "It would hopefully accelerate the speed, the movement with which we are headed toward meaningful substantitive regulation."
Meta is not the only tech giant to make changes to its responsible AI team. In March, Microsoft laid off its entire ethics and society team.
Meta did not respond to a request for comment.
Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.