New policy to tackle content that could fuel violence may be well-meaning, but the complexity of the task is mind-boggling
Facebook faces a particular challenge in WhatsApp, on which dangerous rumours are spread through encrypted messages. Photograph: Getty Images
Facebook has been grappling with its role in spreading false news and disinformation for a few years, but a spate of mob violence in India, Sri Lanka and Myanmar has spurred the social network into a knee-jerk policy change.
Until now, Facebook has dealt with disinformation by making it less prominent in people’s news feeds. This week, the company announced it would start to delete inaccurate or misleading information created or shared “with the purpose of contributing to or exacerbating violence or physical harm”.
On the face of it, this seems a reasonable and well-intentioned policy. However, even the lightest interrogation reveals a mind-bogglingly complex and thankless task.
In addition, any successes will be undermined by the fact that much of the inflammatory misinformation in South Asia is being spread through Facebook’s sister platform WhatsApp, where encryption makes content moderation impossible.
The policy change will first be implemented in Sri Lanka, where pernicious falsehoods on the platform, such as the allegation that Muslims were putting sterilisation pills into food intended for the country’s Sinhalese majority, have stoked riots, beatings and the destruction of mosques and Muslim-owned businesses. The Sri Lankan government temporarily blocked Facebook services in March in an effort to defuse the situation.
Facebook said it was working with local civil society groups to identify which content might contribute to physical harm. Once the company has verified that information is false and could be a contributing factor to “imminent” violence or harm to physical safety, Facebook will take it down.
Last month, the company said, it removed content that falsely claimed Muslims were poisoning food. The company would not reveal the exact content it had removed, nor the names of the civil society groups it was working with. A representative from the Centre for Policy Alternatives, one of the more vocal civil society groups in Sri Lanka, said: “This is not something we were told about.”
The policy announcement appears to have been rushed out to provide some “news” for dozens of non-US journalists whom Facebook had flown in from Europe, Asia and Latin America for a day-long media event at the company’s Menlo Park headquarters on Wednesday.
This could explain why Facebook representatives were not able to answer questions about the specifics of the policy, which the company plans to roll out over the coming months. What is the threshold for violence? A punch? Arson? A lynching? Will it retroactively delete hoaxes like Pizzagate that bubble up for months before someone fires a gun in a pizza parlour? Will Facebook defer to civil society groups on all sides? If so, what will it do if there is no consensus?
Beyond the practicalities of implementing the policy, one of the most glaring challenges appears to be how Facebook will know if its actions are really helping to mitigate violence.
“You will never know the harm you prevent,” said Joan Donovan of Data & Society. “It’s an immeasurable win and impossible to evaluate.”
Still, even in its messy form, the update has been welcomed by those who have grown frustrated by Facebook’s inaction in the name of freedom of expression.
“This is a positive step forward,” said Claire Wardle, a research fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard who specialises in the spread of misinformation. “It’s been a wake-up call in the last six months to see how rumours are escalating into real-world violence.”
“It doesn’t mean it’s going to be easy. It’s massively complex and I hope they work very closely with local civil society groups and hire moderation staff of people who speak local languages,” she added.
While civil society groups can identify content that might incite violence, they don’t have access to Facebook’s social graph.
“Facebook knows the ways this misinformation travels,” said Donovan. “It knows there are hubs and spokes. It needs to use its own data and invest in those groups that can help them understand the context of their data to spot actual manipulation and thwart these accounts.”
Even if Facebook cracks misinformation on its main platform, it has a trickier problem on its hands with WhatsApp, where much of the most dangerous rumour-mongering in South Asia takes place. WhatsApp messages have end-to-end encryption, which means that Facebook cannot see or moderate their content.
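Why encryption forecloses moderation can be shown in a few lines. The sketch below is a minimal illustration in Python, using symmetric Fernet encryption from the cryptography package as a stand-in for WhatsApp’s far more elaborate Signal-protocol key exchange; the point is simply that the relay server never holds a key, so all it can inspect is ciphertext.

```python
# Minimal sketch of end-to-end encryption, with Fernet standing in for
# WhatsApp's Signal protocol (an assumption for illustration only).
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# The two endpoints share a key; the relay server never sees it.
shared_key = Fernet.generate_key()
sender = Fernet(shared_key)
recipient = Fernet(shared_key)

ciphertext = sender.encrypt(b"unverified rumour text")

# The server relays only opaque bytes plus routing metadata --
# there is no plaintext here for a moderator or classifier to check.
print(ciphertext[:32], b"...")

# Only a key holder can recover the message.
print(recipient.decrypt(ciphertext))  # b'unverified rumour text'
```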
In places like Sri Lanka and India, people use the messaging app differently from the typical American. Users join groups of well over 100 participants (WhatsApp caps groups at 256 members) that are used to broadcast and discuss local issues – and disinformation.
“The networks are like a honeycomb, with political operatives with 10-20 phones networked into all different groups. They will start a fire in each of those groups and then other political operatives will forward the message into other groups,” said Donovan.
“Because the switch isn’t algorithmic but human, it means that Facebook in effect is trying to police human behaviour,” she said.
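As a rough illustration of the hub-and-spoke pattern Donovan describes, the sketch below flags phones that belong to an unusually large number of groups. The (phone, group) records and the threshold are invented for the example; WhatsApp has not said what membership metadata it retains or how it would score it.

```python
# Hypothetical sketch: flag phones joined to unusually many groups --
# the "10-20 phones networked into all different groups" pattern.
# The records and threshold below are invented for illustration.
from collections import defaultdict

memberships = [
    ("phone-A", "group-1"), ("phone-A", "group-2"), ("phone-A", "group-3"),
    ("phone-A", "group-4"), ("phone-B", "group-1"), ("phone-C", "group-2"),
]

groups_per_phone = defaultdict(set)
for phone, group in memberships:
    groups_per_phone[phone].add(group)

THRESHOLD = 3  # arbitrary cut-off for this sketch
suspects = {phone: len(groups)
            for phone, groups in groups_per_phone.items()
            if len(groups) >= THRESHOLD}
print(suspects)  # {'phone-A': 4}
```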
Pankaj Jain, who runs SM Hoax Slayer, an Indian verification site dedicated to debunking fake news online, said that WhatsApp was “obviously” worse than Facebook for spreading misinformation. He said this was partly because the messaging app was so easy to use and had the widest reach among rural communities, and partly because data charges were so low.
His third reason relates to a fundamental privacy feature within the app. “People who create and spread fake news are aware they can’t be tracked so WhatsApp is their first choice for fake news.”
Harssh Poddar, a senior police official in the city of Malegaon in the Indian state of Maharashtra who has dealt with mob violence triggered by unfounded fear over child kidnappings, agreed. But he noted that the new Facebook policy could help to eliminate some of the source videos and memes that get shared through WhatsApp, nipping them in the bud.
“It would make a difference to the extent that it might quell some of the sources from where these doctored videos or fake news are spreading,” Poddar said.
Poddar, who has been running his own media literacy training in Malegaon, would like Facebook to be more responsive to requests from local law enforcement to help identify habitual offenders on WhatsApp.
The WhatsApp spokesman Carl Woog, speaking at the Facebook media event, described the violence in Sri Lanka and India as “horrific”.
“It’s been really terrible to watch and our hearts have been broken by what we’ve seen,” he said.
The company has been running ads in Indian media and working with civil society groups to run training sessions on misinformation and digital literacy. The app has also introduced a new label for forwarded messages, which signals that the sender did not compose the content themselves.
“We take this quite seriously,” said Woog. “But there are limitations.”
Without access to the content of WhatsApp messages, Facebook must focus on the metadata: phone identifiers, IP addresses, how messages flow between members of different groups – and who the influencers are.
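To make that metadata-only approach concrete, one could model forwards between groups as a directed graph and rank groups by how many others they seed. The sketch below uses the networkx library on an invented edge list; the data shape is an assumption, since WhatsApp has not published what it actually logs.

```python
# Hypothetical sketch: ranking "hub" groups from forwarding metadata alone,
# without reading any message content. The edge list is invented.
# Requires: pip install networkx
import networkx as nx

# Each edge means "a message was forwarded from one group into another".
forwards = [
    ("group-1", "group-2"), ("group-1", "group-3"), ("group-1", "group-4"),
    ("group-2", "group-5"), ("group-3", "group-5"), ("group-4", "group-1"),
]

G = nx.DiGraph()
G.add_edges_from(forwards)

# Out-degree centrality surfaces the groups that seed the most others --
# the hubs in the hub-and-spoke structure described above.
ranking = sorted(nx.out_degree_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking[:3])  # group-1 ranks highest
```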
“We’re going to see a lot of moves [from Facebook],” Wardle said, “as they have realised WhatsApp is their Achilles heel.”