After fine, YouTube AI to help flag content for children

YouTube will use AI to help flag videos aimed at children, following a settlement with the FTC and the New York attorney general over the mishandling of children's data.

YouTube said it will use machine learning to help identify videos aimed at children after regulators alleged the video-streaming company illegally collected data from children using the platform and sold that data to advertisers. Some experts, however, are skeptical that YouTube will actually stop collecting and processing children's data.

The Google-owned company has long used machine learning to help flag and remove videos and comments that violate its community guidelines, with mixed results. YouTube AI also helps drive its powerful recommendation system.

FTC settlement

YouTube's increased efforts to identify content aimed at children come after Google and YouTube were fined $170 million by the Federal Trade Commission (FTC) and the New York attorney general, after regulators found YouTube illegally collected personal data from children without their parents' consent. Regulators say YouTube then used that data to deliver targeted ads to children, potentially earning the company millions of dollars.

Regulators claimed the company's actions violated the federal Children's Online Privacy Protection Act. The months-long investigation came to a close Sept. 4, when the FTC publicly released the details of the settlement.

"The YouTube FTC case alludes to a much bigger problem," said Alan Pelz-Sharpe, founder of market advisory and research firm Deep Analysis, based in Groton, Mass. "Cloud data through SaaS applications is regularly mined without consent," he said. Vendor guidelines are typically opaque and occasionally deliberately misleading, Pelz-Sharpe said.

This case with YouTube crosses a line, as it involves children, he said.

"But, this happens all the time and we will seldom ever know it is happening. Maybe it's not skullduggery, but it is willfully ignorant," Pelz-Sharpe said. "They probably won't sell your private data to a third party, but they will do pretty much anything else they want with it as long as that remains behind closed doors."

The settlement requires Google and YouTube to pay $136 million to the FTC and $34 million to New York.

The bigger issue

YouTube CEO Susan Wojcicki said in a Sept. 4 blog post that, in about four months, YouTube will stop offering personalized ads on content aimed at children and eliminate some features on that type of content, including comments and notifications.

[Image caption: Following a settlement with the FTC, YouTube will use AI to help identify content aimed at children.]

YouTube will also "limit data collection and use on videos made for kids only to what is needed to support the operation of the service," she said. However, it's unclear how much and what kind of data YouTube will still collect.

"Clearly these firms need your data to run the application, they need to mine the data to optimize the application and learn how to improve it," Pelz-Sharpe noted. "But truly anonymizing that data is hard, if not near impossible, without making it useless."

Content creators will have to flag their content if it targets children, and YouTube AI and machine learning will now help automatically flag videos that "clearly target young audiences, for example those that have an emphasis on kids' characters, themes, toys, or games," Wojcicki said.
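Wojcicki did not describe how the system works internally. As a rough illustration only, the sketch below shows how a metadata-based classifier might score video titles for kids' themes; the toy data and model choice are assumptions, not YouTube's actual pipeline, which would presumably combine many more signals, such as thumbnails, audio and channel history, along with human review.

```python
# Hypothetical sketch of metadata-based flagging, not YouTube's actual
# system: score video titles for kids' themes with a simple text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: titles labeled 1 if aimed at children, else 0.
titles = [
    "Learn colors with toy trucks for toddlers",
    "Nursery rhymes and kids songs compilation",
    "Quarterly earnings call highlights",
    "Advanced template metaprogramming tutorial",
]
labels = [1, 1, 0, 0]

# TF-IDF features over the titles feed a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(titles, labels)

# Probability that a new title targets young audiences.
print(model.predict_proba(["Cartoon characters play with toys"])[0][1])
```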

YouTube and Google did not respond to requests for comment for this story.

AI troubles

YouTube's AI and machine learning technologies have historically had trouble flagging inappropriate content, so it is unclear how well they will work in identifying content for children.

In its recently released Community Guidelines enforcement report, YouTube said that, as of June 2019, it had removed a total of 4,069,349 videos for violating those guidelines, more than a million of them between April and June 2019.

While those numbers appear impressive, YouTube AI has been known to make mistakes, flagging videos as inappropriate when they aren't, or leaving videos that violate guidelines on the platform.

The New York Times in June first revealed the platform's propensity to recommend videos of partially clothed young children after users watched videos with sexual content. Some of those videos included links to the children's social media accounts.

YouTube made several changes after The New York Times story, including tweaks to the recommendation system and, in some cases, limits on it, but refused to stop recommending children's videos entirely, citing the recommendation system as a major traffic driver.
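YouTube has not published how those limits work. Purely as a hypothetical sketch, one way a platform could limit rather than remove recommendations is to gate the candidate pool by a "made for kids" flag; the data model and gating rule below are invented for illustration, not YouTube's documented behavior.

```python
# Purely illustrative sketch of "limiting" rather than removing
# recommendations; the data model and gating rule are invented, not
# YouTube's published behavior.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    made_for_kids: bool

def limit_recommendations(current: Video, candidates: list[Video]) -> list[Video]:
    """Gate the candidate pool by the 'made for kids' flag."""
    # On a kids-flagged video, recommend only other kids-flagged videos;
    # elsewhere, keep kids-flagged videos out of the rail entirely.
    return [v for v in candidates if v.made_for_kids == current.made_for_kids]

pool = [Video("a1", True), Video("b2", False), Video("c3", True)]
print(limit_recommendations(Video("kids-song", True), pool))
```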
