
As lawsuits pile up, Section 230 shields digital platforms

Section 230 immunity and First Amendment rights create a staggering challenge for legal teams looking to hold social media platforms accountable for alleged harms.

Lawsuits targeting social media platforms' use of recommendation algorithms and emerging AI tools are facing an increasingly difficult legal landscape as tech companies wield protections like Section 230 and the federal government slashes research funding.

Section 230 of the Communications Decency Act shields online digital platforms from liability for third-party content. It applies to social media platforms such as X, Snapchat, TikTok and Meta, owner of Facebook and Instagram, which offer public feeds where third parties share articles, photos, videos and other content. Still, multiple lawsuits filed against the companies raise concerns about harms litigants claim are caused by social media platforms' tools, including recommendation algorithms and generative AI chatbots.

In addition, President Donald Trump's cuts to research funding and grants, including at institutions such as Johns Hopkins University and Columbia University, hinder scientists and researchers hoping to study the impact of social media platforms on teenagers and older consumers, said S. Bryn Austin, a professor at the Harvard T.H. Chan School of Public Health. Austin spoke during an event hosted on Tuesday by the Knight-Georgetown Institute (KGI) and Georgetown University’s Institute for Technology Law and Policy.

"This is going to make it much more difficult for external scientists to be able to carry out the research that we need and the monitoring that we need to be able to keep track of what's happening in social media and to make sure they are made safer places," she said.

Tech companies say they already offer protective measures for young people online, but they have also been outspoken about not wanting content moderation to interfere with free speech. Meta, for example, outlines multiple tools and features it already uses to protect teenagers on its platform. TikTok includes a parental guide and limits mature content for users under the age of 18.

Section 230 shields social media platforms

State attorneys general across the U.S. have sued social media platforms, citing harms to teens and consumers.  

New Mexico Attorney General Raúl Torrez, for example, sued Snap, Inc., owner of Snapchat, alleging the platform's recommendation algorithm enabled child sexual exploitation. New York Attorney General Letitia James and California Attorney General Rob Bonta led 14 state AGs in a lawsuit against TikTok for "misleading the public about the safety of its platform and harming young people's mental health." Meanwhile, multiple state AGs have filed lawsuits against Meta over the last year claiming the company violated consumer protection laws by using dark patterns to manipulate its users.

Consumers are also challenging social media platforms and AI companies. The Social Media Victims Law Center is leading lawsuits against Character.ai, an AI chatbot platform that enables users to create and interact with AI-generated characters. The law center represents Megan Garcia, who sued Character.ai in October 2024 following the death of her 14-year-old son, Sewell Setzer III. Garcia claims the company is responsible for her son's death; he took his own life after engaging in conversations with a chatbot on the platform.

State legislatures are also attempting to regulate social media platforms using data privacy laws or introducing new bills specifically targeting children's safety online. Utah lawmakers passed the App Store Accountability Act earlier this year, requiring app store operators to verify users' ages and obtain parental consent.

As consumers and state AGs look for ways to hold social media companies accountable for designing safer online platforms and protecting children online, the tech industry is pushing back, invoking either Section 230 or First Amendment protections.

NetChoice, a trade association that represents Meta, Amazon, Google, Netflix and other tech companies, has challenged state legislation. NetChoice was granted a preliminary injunction against the California Age-Appropriate Design Code Act, which would have implemented measures to protect teenagers online.

NetChoice claimed the law violated its members' First Amendment rights because it was focused on content. The ruling reaffirmed that the "government cannot control what lawful speech Americans see, say, or share online," Chris Marchese, the group's director of litigation, said in a release.

Several lawsuits against social media companies have also been dismissed on Section 230 grounds. Section 230 does protect companies from liability for third-party content, but it was never meant to shield them from liability for platform design flaws or for content the companies themselves create, said Meetali Jain, founder of the Tech Justice Law Project and a speaker during the event.

"The original understanding of 230 was never meant to extend that far, but courts around the country have extended this 230 immunity shield to instances in which there have been evidences of real harm that is not attributable to third-party content," she said.

Jain said the conversation is slowly starting to shift as policymakers and state AGs frame the problematic features of social media platforms as design-based and thus content-agnostic. That could eventually change the narrative about Section 230 protections, she said.

"The point of intervention is upstream," she said. "You're intervening at the point of how are these things being designed as opposed to downstream in terms of what content are users seeing."

More research needed to track harms

As legal teams and judges assume a central role in social media regulation in the U.S., which Jain said has resulted in inconsistent rulings between courts over the years, it's important for researchers to clearly outline social media's impact on users.

To do that, external scientists need access to internal data from social media companies, which is rarely easy to obtain, said Peter Chapman, KGI associate director, during the event on Tuesday.

"Data access is essential to actually assess and establish potential harms and benefits from social media, and it's also critical to identify effective strategies to prevent or mitigate harms," Chapman said.

Harvard's Austin said social media platforms are not compelled to share product data with external research teams, which leaves researchers with limited data to work with when tracking and understanding social media harms. Federal funding cuts for academic research will only worsen the problem, Austin said.

"We need to be able to identify and measure specific potential harms so they can be monitored," she said. "Researchers are constrained by barriers to accessing data and the secrecy of these platforms."

Makenzie Holland is a senior news writer covering big tech and federal regulation. Prior to joining Informa TechTarget, she was a general assignment reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
