Author: Mathew Ingram / Source: VentureBeat
Above: Speaking at the Asia-Pacific Economic Cooperation CEO Summit on November 19, 2016, Facebook cofounder and CEO Mark Zuckerberg vowed, “We also need to do our part to stop the spread of hate and violence and misinformation.”
After acknowledging that it has a problem with fake news, Facebook recently introduced a feature that flags certain posts as “disputed.” In some cases, however, the feature appears to be having the opposite of the effect Facebook intended.
According to a report by The Guardian, the tagging of fake news is not consistent, and some stories that have been flagged continue to circulate without a warning. In other cases, traffic to fake news posts actually increased after Facebook applied the warning.
Facebook started rolling out the new feature last month as part of a partnership with a group of external fact-checking sites, including Snopes.com, ABC News, and PolitiFact.
When a user tries to share a link that has been marked as questionable, an alert pops up saying the story in question has been disputed. The alert links to more information about the fact-checking feature and notes that “sometimes people share fake news without knowing it.”
If the user shares the link anyway, it is supposed to appear in other users’ news feeds with a large note that says “disputed” and lists the organizations that flagged it as fake or questionable.
The idea behind the effort was to try to decrease the visibility of hoaxes and fake news, which many Facebook critics believe are spread rapidly by the site’s news-feed algorithm.
In a number of cases, however, the Guardian said it appears that the fake-news warning is either being applied too late—after a story…