The social network’s algorithms scan billions of posts each day in hopes of flagging misinformation before it goes viral; humans play a supporting role, but in the end they simply can’t keep up.
At Facebook Inc. headquarters in Silicon Valley this week, engineers and researchers huddled around computers in a newly configured “war room” to fight misinformation ahead of the midterms. Almost 3,000 miles away, in Philadelphia, the fact-checkers hired to be on the front lines haven’t received fresh marching orders.
The disconnect highlights how Facebook’s efforts to combat fake news are playing out differently this election cycle than many expected. Although the company has touted its partnerships with organizations including Factcheck.org in Philadelphia that provide human fact-checkers to vet possibly phony posts, those groups are playing a limited role.
The vast majority of Facebook’s efforts against fake news are powered by artificial intelligence, not humans.
Factcheck.org is one of five domestic groups hired by Facebook to deploy human fact-checkers to help prevent a repeat of 2016, when the social-media giant’s platform was flooded with misinformation aimed at sowing divisions ahead of the presidential election.
On one recent morning, a Factcheck reporter reviewed a dubious Facebook post in which Democratic Rep. Nancy Pelosi purportedly praised President Trump’s tax cuts, but other staffers busied themselves with workaday tasks such as vetting traditional political advertisements and reviewing the public statements of elected officials.
Out of Factcheck’s full-time staff of eight people, two focus specifically on Facebook. On average, they debunk less than one Facebook post a day. Some of the other third-party groups reported similar volumes. None of the organizations said they had received special instructions from Facebook ahead of the midterms, or perceived a sense of heightened urgency.
ABC News, which was part of the fact-checking effort when it began early last year, has dropped out. "We did a review, and we couldn't tell if it was really making any difference, so we decided to reallocate the resources," said a person familiar with ABC's decision.
Facebook says fact-checkers were always expected to play a supporting role, and the reality is that humans can’t move quickly enough to identify and act on misinformation before it goes viral on a platform the scale of Facebook’s, with billions of posts produced each day.
"Fact-checking has taken up a disproportionate amount of the conversation" around fake news, said Tessa Lyons, a Facebook product manager focused on the integrity of information in the news feed.
The most important function of human fact-checkers is to contribute to Facebook’s understanding of the sites that share false news and provide feedback that helps machine learning become more effective, Ms. Lyons said.
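That feedback loop can be illustrated in miniature. The sketch below, in Python, shows the general pattern Ms. Lyons describes: verdicts from human fact-checkers become labeled examples that train a text classifier, which then scores new posts. Everything here is a hypothetical illustration of the technique, not Facebook's actual system; the sample posts, labels, and model choice are invented for the example.

```python
# A minimal sketch of a human-in-the-loop feedback cycle: fact-checker
# verdicts serve as training labels for a simple text classifier.
# The data and model below are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical fact-checker verdicts: (post text, 1 = judged false).
labeled_posts = [
    ("Politician praises rival's tax plan in leaked memo", 1),
    ("City council approves new budget for road repairs", 0),
    ("Miracle cure banned by doctors, share before it's deleted", 1),
    ("Local library extends weekend opening hours", 0),
]
texts, labels = zip(*labeled_posts)

# Convert post text into features and fit the classifier.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# Score a new post. In a real pipeline, high-scoring posts would be
# queued for human review, and each verdict would be appended to the
# labeled data before the model is retrained.
new_post = ["Secret memo shows senator praised opponent's tax cuts"]
score = model.predict_proba(vectorizer.transform(new_post))[0][1]
print(f"Estimated probability the post is false: {score:.2f}")
```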
Facebook’s war room, which became operational in September ahead of elections in Brazil, is staffed by employees rather than outside fact-checkers, although the company said it would include the outsiders in thorny decisions. One morning this week, several Facebook employees were tracking content across Facebook, WhatsApp and Instagram, as well as national and international news. On one wall was a large American flag; clustered in a corner were motivational posters with slogans like “Focus on Impact” and “Be the Nerd.”
Facebook spends billions annually to improve its artificial intelligence for a range of tasks, including content moderation. It paid Factcheck $189,000 in the fiscal year ended in June, according to public documents.
Earlier this year, Facebook also recruited the Associated Press to do fact-checking in all 50 states ahead of the midterms, a spokeswoman for Facebook said.
The other groups involved since the start of the fact-checking effort are PolitiFact and Snopes. After ABC News dropped out, the Weekly Standard came onboard. Those groups either declined to comment or didn’t respond to requests for comment about how much they are paid, and Facebook also declined to comment.
Facebook for years resisted fact-checking content on the site, with CEO Mark Zuckerberg saying he didn't want the company's employees to be "arbiters of truth." The introduction of third-party fact-checkers was in part an effort to insulate the company from criticism that it wasn't taking misinformation seriously, and from concerns that its own employees' biases could color decisions to demote fake news.
Since beginning the program in early 2017, Facebook has expanded it to 19 countries and has lately made several changes to streamline the operation. It recently gave fact-checkers the ability to vet videos and photos, as well as links.
Ms. Lyons wouldn't offer an assessment of her team's efforts to clean up the news feed, but pointed to two recent academic studies: one, a collaboration between researchers at Stanford University and New York University, found that interactions with fake-news stories on Facebook have declined since early 2017; the other, from the University of Michigan, found the overall quality of content on Facebook has improved since mid-2017.
Still, trying to rid the news feed of lies, malicious rumors, fake news and misleading content remains an uphill battle.
“It’s like bringing a spoon to clear out a pig farm,” said P.W. Singer, co-author of the book “LikeWar: The Weaponization of Social Media” and senior fellow at New America, a nonpartisan policy think tank in Washington, D.C. “Facebook is never going to be able to hire enough people, and the artificial intelligence is never going to be able to do all of this on its own.”
At Factcheck, the editing process can be time-consuming, in part to ensure there are no mistakes. Each post is screened by as many as four editors before being published, said Saranac Hale Spencer, one of the two reporters Factcheck hired specifically to work on the Facebook initiative.
When Facebook first set up the initiative, it required that at least two fact-checking organizations agree that something was incorrect before listing it as debunked, but it has since loosened that requirement in the interest of speed, Ms. Spencer said.
Facebook is also doing more to guide the fact-checkers toward which items to address. On Friday, it began testing a system that sends fact-checkers a push notification when its models conclude with a high degree of confidence that an item is false. Previously, the groups had little guidance on how to choose among the thousands of flagged posts in the database at any given time.
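The routing step described above is simple to sketch. The Python below shows one plausible shape for it, under stated assumptions: the threshold value, the FlaggedPost structure, and the notify_fact_checkers function are all hypothetical stand-ins, not Facebook's actual interface.

```python
# A minimal sketch of confidence-threshold routing: items the model is
# highly confident are false trigger a notification to fact-checkers;
# everything else stays in the general review queue. All names and the
# 0.9 cutoff are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff for "high confidence"

@dataclass
class FlaggedPost:
    post_id: int
    text: str
    model_score: float  # model's estimated probability the post is false

def notify_fact_checkers(post: FlaggedPost) -> None:
    # Stand-in for a real push notification; here we just print.
    print(f"PUSH: review post {post.post_id} (score {post.model_score:.2f})")

def route(posts: list[FlaggedPost]) -> list[FlaggedPost]:
    """Notify fact-checkers about high-confidence items; return the rest."""
    general_queue = []
    for post in posts:
        if post.model_score >= CONFIDENCE_THRESHOLD:
            notify_fact_checkers(post)
        else:
            general_queue.append(post)
    return general_queue

remaining = route([
    FlaggedPost(1, "Miracle cure doctors don't want you to see", 0.97),
    FlaggedPost(2, "Candidate's remarks possibly out of context", 0.55),
])
```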