Battling misinformation focus of Facebook, Twitter execs' talk
Facebook and Twitter executives say finding patterns of malicious activity is more effective at identifying bad actors on their sites than looking at the misleading information itself.
SAN FRANCISCO -- Pictures, video and words on Facebook and Twitter that are meant to influence elections or advance a political agenda often grab headlines. But when it comes to identifying the creators of misinformation, establishing their patterns of behavior is critical, executives from the social media companies said.
In a panel discussion at the RSA Conference this week, executives from Facebook and Twitter discussed how their companies battle hate groups and foreign agents that use the platforms to spread misinformation. The keynote event at the security conference came the same day Facebook disclosed plans to make the platform less useful to abusers by shifting people toward private conversations within smaller groups.
Critics and government officials have condemned the social networks for prioritizing their business models over a more aggressive approach to finding and banishing bad actors from their sites. Facebook and Twitter have argued that removing malicious activity is difficult without infringing on legitimate public discussion of controversial issues.
"That's an incredibly hard balance," Nathaniel Gleicher, the head of cybersecurity policy at Facebook, told RSA attendees.
Behavior is key to cleaning up misinformation on social networks
While removing content is important in stopping hate speech, it doesn't help identify and remove malicious users of the sites, said Gleicher and Del Harvey, vice president of user safety and security at Twitter.
"We often have more success in looking for certain patterns of behavior or certain attack styles than looking at the content itself," Harvey said.
The malicious activity Twitter looks for includes groups of accounts pushing the same types of information while sharing other similarities, such as operating from the same IP networks or using the same carriers. Those indicators become useful identifiers when combined with activity data collected by watchdog organizations.
Once Facebook builds a bad-actor profile, it can feed that profile into the network's algorithms, which drive an automated approach to removing large numbers of malicious groups.
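The behavioral approach Harvey and Gleicher describe lends itself to a simple two-step pattern: group accounts by shared infrastructure signals, then check whether the group is pushing near-identical content. The Python sketch below is a hypothetical illustration of that idea only; the field names, sample data, similarity measure and threshold are assumptions, not a description of either company's actual detection pipeline.

```python
# Hypothetical sketch: flag clusters of accounts that share network
# infrastructure and push near-duplicate content. All field names and
# thresholds are illustrative assumptions.
from collections import defaultdict
from difflib import SequenceMatcher

accounts = [
    {"id": "a1", "ip_prefix": "203.0.113.0/24", "carrier": "CarrierX",
     "posts": ["Vote no on measure 5!", "Measure 5 is a scam"]},
    {"id": "a2", "ip_prefix": "203.0.113.0/24", "carrier": "CarrierX",
     "posts": ["Vote NO on measure 5!!", "Measure 5 is a scam."]},
    {"id": "a3", "ip_prefix": "198.51.100.0/24", "carrier": "CarrierY",
     "posts": ["Great weather today"]},
]

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    """Rough text similarity using difflib's ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Step 1: group accounts by shared infrastructure signals.
groups = defaultdict(list)
for acct in accounts:
    groups[(acct["ip_prefix"], acct["carrier"])].append(acct)

# Step 2: within each group, count near-duplicate post pairs across accounts.
for key, members in groups.items():
    if len(members) < 2:
        continue
    duplicate_pairs = sum(
        1
        for i, m1 in enumerate(members)
        for m2 in members[i + 1:]
        for p1 in m1["posts"]
        for p2 in m2["posts"]
        if similar(p1, p2)
    )
    if duplicate_pairs:
        ids = [m["id"] for m in members]
        print(f"Suspicious cluster {key}: accounts {ids}, "
              f"{duplicate_pairs} near-duplicate post pairs")
```

In this toy example, accounts a1 and a2 would be flagged because they share an IP prefix and carrier and post nearly identical messages, while a3 is left alone; it is the combination of behavioral signals, not the content of any single post, that drives the flag.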
"There's a lot of focus on scale solutions," Gleicher said of Facebook, which has more than 2 billion active users worldwide.
Anonymous posting on social media
Gleicher and Harvey also addressed the issue of allowing users to post content anonymously, a practice that can lead to a proliferation of fake accounts and misinformation. The executives, along with panelist Rob Joyce, a senior advisor at the National Security Agency, agreed that anonymity is necessary to protect social media participants who live in countries with repressive governments.
Rather than forcing people to use their real identities, Facebook focuses on establishing whether users are who they say they are, such as dissidents or social activists from a particular country. "We think a lot about authenticity, not anonymity," Gleicher said.
Representatives of Facebook and Twitter have testified before congressional committees considering regulations to prevent the sites from misusing the personal data they collect from users. Investigations by The New York Times and other news outlets have found that Facebook shared user information with third parties without users' knowledge.
Gleicher said the company was not against requiring more transparency about who gets access to Facebook data. For her part, Harvey said Twitter favors "meaningful transparency," which she defined as disclosing information with context so that it's useful to recipients, whether they're users, researchers or government agencies.