
Social media algorithms under Senate microscope

Social media algorithms, like those from Facebook, Twitter and YouTube, have created economies and sown misinformation. Senators want to know how they should be regulated.

In a fact-finding mission on how social media algorithms work, U.S. senators grilled content policy leaders from Facebook, Twitter and YouTube Tuesday.

During a hearing held by the Senate Judiciary Committee's Subcommittee on Privacy, Technology and the Law, senators questioned the executives on how their content-ranking algorithms work, how they control the spread of misinformation and violent rhetoric on their platforms and how their business models operate. The executives took turns explaining their algorithms, as well as their companies' efforts to be more transparent with consumers about how algorithmic content ranking shapes what they see.

During the hearing on "Algorithms and amplification: How social media platforms' design choices shape our discourse and our minds," U.S. senators expressed concern that social media algorithms tuned to user interests can surface skewed and inaccurate content, such as misinformation about COVID-19 or the presidential election, further dividing communities and inciting unrest. The hearing comes at a time when the federal government has increased its scrutiny of how technology and social media companies do business, putting everything from data practices to the acquisition of competitors under the microscope.

"We need social media companies to finally take real action to address the abuse and misuse of their platforms and the role that algorithms play in amplifying it," said Sen. Dick Durbin (D-Illinois).

Social media algorithms and amplification

When users log on to a social media platform like Twitter, Facebook or YouTube, they often see content curated specifically for them by algorithms that rely on user data, such as which posts the user has engaged with and what content formats the user prefers.
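As a rough illustration only, and not any platform's actual system, a feed ranker in this style might score candidate posts against a user's interaction history. All of the field names, weights and the decay formula below are invented for the sketch:

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topic: str
    content_format: str   # e.g. "video", "image", "text"
    timestamp: float      # seconds since epoch

@dataclass
class UserProfile:
    # Hypothetical engagement signals: interaction counts per topic and format.
    topic_engagement: dict = field(default_factory=dict)
    format_engagement: dict = field(default_factory=dict)

def score(post, user, now):
    """Toy relevance score: weight past topic/format engagement, decay with age."""
    topic_affinity = user.topic_engagement.get(post.topic, 0)
    format_affinity = user.format_engagement.get(post.content_format, 0)
    recency = 1.0 / (1.0 + (now - post.timestamp) / 3600.0)  # fades by the hour
    return (2.0 * topic_affinity + format_affinity) * recency

def rank_feed(posts, user, now):
    # Curated feed: highest score first. The opt-out Facebook and Twitter
    # describe is the same list sorted by timestamp alone (newest first).
    return sorted(posts, key=lambda p: score(p, user, now), reverse=True)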


But that practice can put users inside an echo chamber where new information reflects old information and, in some cases, amplifies the spread of misinformation, a topic at the heart of Tuesday's hearing.

Facebook, Twitter and YouTube policy leaders testified that their companies have undertaken efforts to flag misinformation and take down content, such as violent rhetoric, that violates community guidelines. But the problem persists, as was evident in January when groups organized on social media and stormed the U.S. Capitol following the election results and, more recently, in the spread of misinformation about topics like COVID-19 vaccines.

Facebook and Twitter policy leaders noted that users can opt out of algorithmic content ranking, choosing to instead view the most recent news and content.

Technical smoke and mirrors

Although the three companies claim to use algorithms to reduce the spread of misinformation and harmful content, Tristan Harris, co-founder and president of the Center for Humane Technology and a former design ethicist at Google, said during the hearing that the efforts are often smoke and mirrors.

"While you're hearing from the folks here today about the dramatic reductions in harmful content, borderline content, hiring 10,000 more content moderators -- it can sound convincing," he said. "But, at the end of the day, a business model that preys on human attention means we are worth more as human beings and as citizens of this country when we are addicted, outraged, polarized, narcissistic and disinformed because that means the business model was successful at steering our attention using automation."

Joan Donovan, research director at the Shorenstein Center on Media, Politics, and Public Policy and an adjunct lecturer in public policy at the John F. Kennedy School of Government at Harvard University, said misinformation at scale is a "feature of social media."

Donovan said there are four aspects of social media algorithm design that can send a user down a "rabbit hole" of misinformation: repetition of content as a result of likes and shares; redundancy of content across different products, which makes something feel more true; responsiveness, in that social media companies always return an answer to a query, whether right or wrong; and remembering and reinforcing the keywords a user has searched.

"If you search for contentious content like Rittenhouse, QAnon, Proud Boys or Antifa, you're likely to enter a rabbit hole where extracting yourself from reinforcement algorithms ranges from the difficult to the impossible," she said. "The rabbit hole is best understood as an algorithmic economy where algorithms pattern the distribution of content in order to maximize growth, engagement and revenue."

Striking a delicate balance

Donovan said tackling a problem this big will likely require federal oversight.

Sen. Chris Coons (D-Delaware), chairman of the subcommittee, questioned the "underlying incentives" for all three platforms, asking the policy leaders if their companies provided pay incentives to algorithm teams based on engagement and growth-related metrics. All three denied that algorithm teams were incentivized to increase users' time on the sites.

At the conclusion of the hearing, Coons said the federal government faces the hefty task of striking a delicate balance between potential legislation and fostering innovation -- a balance he said will require more conversation.

"None of us wants to live in a society that, as a price of remaining open and free, is hopelessly, politically divided or where our kids are hooked on their phones and being delivered a torrent of reprehensible material," he said. "But I also am conscious of the fact that we don't want to needlessly constrain some of the most innovative, fastest growing businesses in the west."

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington Star-News and a crime and education reporter at the Wabash Plain Dealer.
