YouTube faces brand freeze over ads and obscene comments on videos of kids

YouTube is firefighting another child safety content moderation scandal which has led several major brands to suspend advertising on its platform.

On Friday investigations by the BBC and The Times reported finding obscene comments on videos of children uploaded to YouTube.

Only a small minority of the comments were removed after being flagged to the company via YouTube’s ‘report content’ system. The comments and their associated accounts were only removed after the BBC contacted YouTube via press channels, it said.

The Times also reported finding adverts from major brands being shown alongside videos depicting children in various states of undress and accompanied by obscene comments.

Brands freezing their YouTube advertising over the issue include Adidas, Deutsche Bank, Mars, Cadbury and Lidl, according to The Guardian.

Responding to the issues being raised, a YouTube spokesperson said it is working on an urgent fix, and told us that ads should not have been running alongside this type of content.

“There shouldn’t be any ads running on this content and we are working urgently to fix this. Over the past year, we have been working to ensure that YouTube is a safe place for brands. While we have made significant changes in product, policy, enforcement and controls, we will continue to improve,” said the spokesperson.

Also today, BuzzFeed reported that a pedophilic autofill search suggestion was appearing on YouTube over the weekend if the phrase “how to have” was typed into the search box.

On this, the YouTube spokesperson added: “Earlier today our teams were alerted to this profoundly disturbing autocomplete result and we worked to quickly remove it as soon as we were made aware. We are investigating this matter to determine what was behind the appearance of this autocompletion.”

Earlier this year scores of brands pulled advertising from YouTube over concerns ads were being displayed alongside offensive and extremist content, including ISIS propaganda and anti-semitic hate speech.

Google responded by beefing up YouTube’s ad policies and enforcement efforts, and by giving advertisers new controls that it said would make it easier for brands to exclude “higher risk content and fine-tune where they want their ads to appear”.

In the summer it also made another change in response to content criticism, announcing it was removing the ability for makers of “hateful” content to monetize via its baked-in ad network, pulling ads from being displayed alongside content…

And last week YouTube announced another tightening of the rules around content aimed at children, including saying it would beef up comment moderation on videos aimed at kids, and that videos found to have inappropriate comments about children would have comments turned off altogether.

The BBC said the problem of YouTube’s comment moderation system failing to remove obscene comments targeting children was brought to its attention by volunteer moderators participating in YouTube’s (unpaid) Trusted Flagger program. Over a period of “several weeks”, it said, only five of the 28 obscene comments it had found and reported via YouTube’s ‘flag for review’ system were deleted.

The BBC also reported criticism directed at YouTube by members of its Trusted Flagger program, who said they don’t feel adequately supported and argued the company could be doing much more: “But for example, we can’t prevent predators from creating another account and have no indication when they do so we can take action.”

Google does not disclose exactly how many people it employs to review content, saying only that “thousands” of people at Google and YouTube are involved in reviewing and taking action on content and comments identified by its systems or flagged by user reports.

But while tech companies have been quick to reach for AI engineering solutions to fix content moderation, Facebook CEO Mark Zuckerberg himself has said that context remains a hard problem for AI to solve.
