Author: Natasha Lomas / Source: TechCrunch
Social media giants Facebook, YouTube and Twitter have once again been accused of taking a “laissez-faire approach” to moderating hate speech content on their platforms.
It follows an escalation of political rhetoric against social platforms in the UK in recent months, after a terror attack in London in March prompted Home Secretary Amber Rudd to call for tech firms to do more to help block the spread of terrorist content online.
In a highly critical report looking at the spread of hate, abuse and extremism on Facebook, YouTube and Twitter, a UK parliamentary committee has suggested the government look at imposing fines on social media firms for content moderation failures.
It’s also calling for a review of existing legislation to ensure clarity about how the law applies in this area.
“Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe,” the committee writes in the report.
Last month, the German government backed a draft law which includes proposals to fine social media firms up to €50 million if they fail to remove illegal hate speech within 24 hours after a complaint is made.
A European Union-wide Code of Conduct on swiftly removing hate speech, agreed between the Commission and social media giants a year ago, does not include any financial penalties for failure — but there are signs some European governments are becoming convinced of the need to legislate to force social media companies to improve their content moderation practices.
The UK Home Affairs committee report describes it as “shockingly easy” to find examples of material intended to stir up hatred against ethnic minorities on all three of the social media platforms it looked at for the report.
It urges social media companies to introduce “clear and well-funded arrangements for proactively identifying and removing illegal content — particularly dangerous terrorist content or material related to online child abuse”, calling for similar co-operation and investment to combat extremist content as the tech giants have already put into collaborating to tackle the spread of child abuse imagery online.
The committee’s investigation, which started in July last year following the murder of a UK MP by a far-right extremist, was intended to be more wide-ranging. However, because the work was cut short by the UK government calling an early general election, the committee says it has published specific findings on how social media companies are addressing hate crime and illegal content online — having taken evidence for this from Facebook, Google and Twitter.
“It is very clear to us from the evidence we have received that nowhere near enough is being done. The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content, to implement proper community standards or to keep their users safe. Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law, and to keep their users and others safe,” it writes.
“If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar technology to stop extremists re-posting or sharing illegal material under a different name. We believe that the government should now assess whether the continued publication of illegal material and…