Canceled executive order clears way for unbridled GenAI growth

At first glance, the deregulation appears to be a way to drive innovation forward. However, it will slow AI safety efforts and could, in some ways, impede innovation in AI technology.

President Donald Trump's elimination of generative AI safety rules for tech vendors signaled to the industry that it should proceed with unbridled development with virtually no regulation.

At the same time, Trump's inner circle includes some of the most public backers of the philosophical approach to generative AI that prioritizes accelerated progress over caution and ethical considerations. The shift has left AI safety advocates scrambling -- at least for now.

Hours after taking office on Jan. 20, Trump revoked an executive order, previously signed by former President Joe Biden, requiring AI developers to share safety test results with the U.S. government before AI systems are publicly released.

A few days later, on Jan. 23, Trump signed a new executive order that he said is intended to "remove barriers to American leadership in AI."

The new executive order revoked AI policies and directives that Trump said impeded AI innovation in the U.S. The order states that within 180 days, the assistant to the president for science and technology, the special advisor for AI and crypto, and the assistant to the president for national security affairs will develop and submit an action plan for enhancing U.S. global AI dominance.

That framework mirrors an AI economic blueprint, released by ChatGPT maker OpenAI days before the inauguration, that argues for minimal regulation. It came as leaders of the tech giants at the center of generative AI innovation -- including Microsoft, Meta, Google and Amazon -- contributed millions of dollars to Trump's inauguration.

More innovation

Tech leaders advocating accelerated generative AI development cheered Trump's first moves on AI as an unleashing of innovation.

Trump's substitution of his own pro-AI executive order for Biden's AI safety policy means there is now room for more innovation, said R "Ray" Wang, founder and CEO of Constellation Research.

"It was very hard to get anything done in the last administration because it was just anti-business," Wang said. Biden's policy emphasizing AI safety included an executive order on AI that wasn't effective because AI companies were thinking, "What is the least they can do, as opposed to what is the most they can do" to be compliant, he said.

A key provision of Biden's AI order mandated that tech vendors building the most powerful large language models and other generative AI systems submit details to the federal government about the platforms' underlying technologies before releasing them into the market.

But AI safety advocates, including some of the world's most prominent AI scientists and researchers, have raised alarms about the risks of generative AI technology, such as hallucinations, built-in biases and even the potential to break free of human control and endanger human life. Proponents of a more cautious approach to AI favor stringent safety guidelines to keep the technology in check.

"It will be very, very short-sighted to put the guardrails to the side," said Merve Hickok, president at the Center for AI and Digital Policy. "You're literally cutting the branch that you're sitting on. Once customers lose trust in AI, then there's not going to be enough adoption by the customers, by the market, and that is going to drain investment in the long run."

Trump's revocation of Biden's AI executive order gives AI vendors intent on moving fast -- building ever bigger and more powerful models with few restraints -- the opportunity to do so, said Mel Morris, CEO of Corpora.ai, a vendor that built an AI research engine. He added that it would spur innovation in artificial general intelligence (AGI), artificial superintelligence and agentic AI.

"The relaxation of legislation will free resources to propel more adventurous innovation and allow products to come to market quicker and with more expansive capabilities," Morris said.

However, fears that Trump's apparent go-ahead signal will let AI vendors essentially do what they want are unfounded, he said. Biden's order was impractical to implement and "had yet to show any real teeth," Morris added.

"What we've got now is a situation that's going to allow the AI firms to develop their technology," he said. "As the application of that technology starts to take more and more effect, then we'll start to see; I'm sure regulation will come into certain aspects.'"

AI arms race

An important facet of Trump's emerging policy on AI is pursuing competition with China, the chief rival in developing powerful AI systems, to ensure U.S. supremacy.

For its part, China does not appear concerned about AI safety and is moving forward as fast as possible, said Avner Braverman, founder and CEO of AI cinematic video startup VOIA.

"We need to run faster," he continued. "If government tries to regulate this, it's only going to lose the race. We're going to lose the race, and we're going to lose the next front of innovation."

To win the competition, AI innovators must work with industries and customers to create AI technology that is effective for the market, Braverman said.

"This is where the market would innovate itself and will push us into real solutions much more effectively than any government-guided [legislation]," he said, "The market will set the limits to how much safety is required."

The most effective role for government in the AI arena is supporting private-industry projects like Stargate, the $500 billion data center and AI infrastructure collaboration among OpenAI, Oracle, SoftBank and other investors, Braverman said.

Some say lack of regulation hinders innovation

Letting AI vendors, enterprises and the market self-regulate is the wrong approach, said Davi Ottenheimer, vice president of trust and digital ethics at private data storage firm Inrupt.

"What this does is that it allows people to self-define what is good, and that's lowering the bar, and if you lower the bar far enough, you make a huge mess with a lot of money that other people clean up," Ottenheimer said.

Mia Shah-Dand, founder of Women in AI Ethics, said a lack of stringent regulation will not lead to faster adoption of the latest AI technologies, even if it leads to more innovation.

While AI vendors can build the most advanced technology, enterprises still have a responsibility to ensure what they offer consumers is grounded in safety and values, Shah-Dand said.

"Typically, companies invest in morality and responsible ethical development," she said. This means organizations are still responsible if the technology they use becomes harmful, especially in industries like healthcare. For example, if the AI system leads to bias in healthcare, causing harm to the patient, the organization using it could still be responsible and sued.

Therefore, as vendors pursue technology advances without enough attention to risks, enterprises will have to perform their own due diligence on safety -- leading to slower uptake of AI, Shah-Dand argued. Organizations don't want to put themselves at legal risk from potential harm caused by malfunctioning AI systems, so they must hold themselves accountable, regardless of who is in office, she said.

"We are doing more user training, doing more comprehensive, aggressive vetting of technologies, so where the burden was with the tech companies, now that burden has been shifted to users of those technologies," Shah-Dand added.

However, enterprises could be relieved of that responsibility if the Trump administration creates policies that affect constitutional rights and changes laws addressing equality and gender, racial or other bias.

"If all of the consumer protection were to vanish, that's a bigger issue than just one technology," Shah-Dand continued. "That's a truly terrifying prospect for us going forward."

And that could put consumer and end-user trust in danger, she said.

An approach that lasts

To keep that trust, organizations should pursue a strategy of ensuring that safety and responsibility standards remain in place, regardless of who's in office, said Kate O'Neill, author, founder and CEO of KO Insights, an advisory firm.

O'Neill said companies will now look to create technology that meets responsible-AI standards.

"Companies need to figure out what they already needed to have figured out, which is how do they want to go about this in an approach that feels like it resonates with them, to their values … that can withstand regulatory changes," O'Neill said.

Instead of waiting for regulation to be enacted at the federal or state level, AI vendors and the enterprises that use AI technology should take matters into their own hands, she said.

"The opportunity is to use this as a more introspective moment," she continued. "They're going to need to figure out what kind of robust, internal AI governance they're going to want to have that has nothing to do with regulatory changes because this is going to be a moving target for the companies."

But supporters of Trump's full-speed-ahead policy on AI, with its minimal emphasis on safeguards, see no reason to be fearful.

Without those constraints, AI vendors will push AI technology to new heights and unprecedented breakthroughs, said Wang, of Constellation Research.

"They're going to put as much money as they can to look at longevity, to cure human disease, and people are willing to make that bet, because there's going to be very light guardrails," he said.

Esther Shittu is an Informa TechTarget news writer and podcast host covering artificial intelligence software and systems.