By Elizabeth Dwoskin, Cat Zakrzewski, Tyler Pager
In tense meetings between Facebook executives and White House officials tasked with fighting the pandemic in recent months, President Joe Biden's team begged the social network giant for something only Facebook possessed: its data.
Facebook's vast data trove could be crucial to understanding how medical misinformation has spread rampantly on the company's platforms - and more importantly, to overcoming the skepticism about vaccines that had arisen from it, according to three administration officials who spoke on the condition of anonymity to discuss sensitive matters. Discussions came to a head last month, when President Biden fell short of his vaccination goals and said platforms like Facebook were "killing people." (He subsequently backed away from the comment.)
Officials thought Facebook was hiding, filibustering and deflecting, according to three people involved in the discussions, in contrast to the White House's conversations with YouTube and Twitter about anti-vaccine misinformation, which the officials thought were more productive.
The Biden team, according to the three officials, asked how many people had been exposed to misinformation about covid-19 on Facebook and its sister platforms, Instagram and WhatsApp. How many users were still sitting on the fence about whether to take the vaccine? And when Facebook blocks its algorithm from spreading unwanted content, how many people are still exposed to it?
For almost as long as Facebook has had its singular cache of data about the behavior and attitudes of billions of people, outsiders have sought to obtain it. But, increasingly, the social network is taking steps to restrict access to the very data needed by the public to understand the scope of the problems and to potentially combat them, some experts and insiders say.
That struggle over data has taken on new urgency in a year when the delta variant has reinvigorated the pandemic and insurgents stormed the Capitol. Facebook has emerged as central to the story, researchers say, because its platform played a key role in fomenting major social harms. But Facebook this month cut off researchers who were accessing Facebook data, drawing a rebuke from the Federal Trade Commission. The company has shut down access to an internal report on the insurrection and considered reducing the scope of a product that is used by the public to examine the company's role in promoting health misinformation and the events of Jan. 6, according to internal documents viewed by The Post and interviews with former employees.
On Wednesday the company went on the defensive again, publishing a "transparency" report and a blog post that attempted to refute the idea - advanced by an advocacy group and cited by Biden - that just 12 influencers were responsible for the majority of anti-vaccine content on the platform. The blog post did not address the questions that the White House and other experts have asked about how much misinformation about the coronavirus is on the platform and who is actually spreading it. The report, a set of curated lists of the most popular content on the network, pushed back against claims that the platform is rife with misinformation or too friendly to conservatives.
The discussions with the White House went in circles, according to the officials.
"It's not that they wouldn't provide data," said Andy Slavitt, who served as a senior pandemic adviser on the White House's coronavirus team and participated in the meetings. "It's that they wouldn't provide meaningful data, and you end up with a lot of information that doesn't necessarily have value."
In a statement, Facebook spokesman Andy Stone said that "the suggestion we are trying to hide or prevent research into the role our platform plays is anecdotal and inconsistent with the facts," noting that the company is partnering with more than 300 academics around the world on various research projects.
Facebook said it shared with the White House public survey data, collected through an academic partnership, about attitudes toward vaccination among Facebook users, as well as the fact that it has removed more than 20 million pieces of content for breaking rules against coronavirus misinformation. On Wednesday, CEO Mark Zuckerberg again cited the takedown number in a CBS interview, though the figure sidestepped the question he was asked, which was about the total amount of misinformation on Facebook.
As long as tech platforms can control the data, they wield great power over efforts to hold them accountable, said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University. Without the data, researchers and experts cannot effectively examine claims about the platforms' role in vaccine misinformation and other ills.
"Until we get a better handle on what the potential harms are, or the scale of potential harms, we can't do anything to fix them," Tromble said.
For years, Facebook gave researchers and developers broad access to its data. The company thought its service could become even more powerful if an ecosystem of outsiders was dependent on it, and more valuable if researchers could use it to come up with major insights about social behavior.
But in recent years, as Facebook has faced one controversy after another, it has become more cautious. In 2017, it limited some features of a Facebook-owned tool called CrowdTangle, which tracks engagement with Facebook posts, after a researcher used the tool to estimate how many Americans were exposed to content from Russian operatives attempting to manipulate the 2016 election. (The company said at the time that it had "fixed a bug" in CrowdTangle and, under pressure, soon disclosed its own measure of the number of people exposed to Russian disinformation.) It later built a library of political ads, but initially limited the way that outsiders could query it.
Since 2018, the company has further restricted developers and academics in what data they can obtain, in response to revelations that a researcher had inappropriately obtained tens of millions of Facebook profiles on behalf of the Cambridge Analytica political consultancy. The incident resulted in a $5 billion privacy settlement with the Federal Trade Commission the following year.
Some experts and former executives say that image-conscious Facebook has become paralyzed by fear of regulation - by the idea that the more it shares, the more problems it potentially exposes. Those revelations could invite greater regulation of the platform as well as another round of public relations crises.
"There are millions of people who think the election is stolen. Millions of people who have been convinced not to take vaccines," said Laura Edelson, the New York University researcher whose misinformation project, the Ad Observatory, was restricted by Facebook. Blocking academics, nonprofits and the White House "feels like Facebook has gone into this defensive crouch where they are not interested in letting anyone help them with this very severe problem."
Dialogues with the Biden team date back to shortly after the presidential election, before vaccines were widely available, as the incoming White House sought to be proactive about misinformation on tech platforms, the Biden officials said.
Some details of the struggles between Facebook and the White House over vaccine misinformation were first reported by the New York Times.
The two camps had a history of tension. Biden was critical of the company's handling of disinformation throughout the election season, and Democratic campaigns generally thought Facebook was friendlier to Republicans. But both sides sought to restart a conversation in good faith, according to the officials and to a person familiar with Facebook's thinking who spoke on the condition of anonymity to discuss sensitive matters.
There was a precedent for the dialogues. Facebook had worked with the White House and government agencies in responding to other public emergencies in the past, such as the threat posed by the Islamic State and other extremist groups. Tech companies including Facebook, YouTube and Twitter formed a partnership in 2016 to address terrorism threats, including a shared database through which they gave the government and one another data about violent terrorist images and recruiting videos that they removed from their platforms.
And ahead of the 2018 midterm elections, Facebook also hosted a meeting with law enforcement officials, including from the FBI, to discuss efforts to prevent Russian interference on tech platforms.
But the social network was less forthcoming when it came to the pandemic, the officials said. Over the course of several months, the Biden officials, including Surgeon General Vivek Murthy and DJ Patil, a prominent Silicon Valley executive and former U.S. chief data scientist who was serving as the Biden transition team's chief technology officer, began asking for specific information.
Instead of responding directly to the questions, Facebook repeatedly tried to turn the discussion to numbers that cast the company in a positive light, such as a statistic that more than 2 billion people had viewed authoritative information about the coronavirus on its platform, according to the people familiar with the conversations. Facebook also shared its policy of removing coronavirus-related health misinformation when it becomes aware of it, as well as linking users who share information about the virus to posts by public health organizations, Facebook's Stone said. He said the company had been working with the White House for many months to help people get vaccinated.
But the Biden officials thought Facebook was stonewalling them, according to the three officials.
The officials thought Twitter's strike policy, under which it suspends users' accounts when they repeatedly run afoul of its rules about health misinformation, was clearer than Facebook's.
Members of Twitter's public policy team meet regularly with the White House and various officials across the administration, said Twitter spokeswoman Katie Rosborough. "The Surgeon General recently outlined areas of action for tech platforms, and much of our existing work aligns with those goals," she said in a statement.
YouTube also shared information about its own failings, according to the Biden officials. The company shared some data about how widely coronavirus misinformation that broke its rules was viewed, according to YouTube spokeswoman Ivy Choi.
But the debates between Facebook and the White House throughout the spring and into summer also gave rise to a broader, still unresolved disagreement over what constitutes misinformation, according to the person familiar with Facebook's thinking. Facebook strongly believes people should have the right to express themselves broadly on social platforms without censorship, and it has reviewed research showing that friends and family can often be more effective at countering misinformation than official sources that people distrust. The Facebook executives thought the Biden camp was going too far by identifying specific pieces of content as problematic and asking the company to potentially suppress valuable conversations in which people express fears and skepticism.
The conversations also turned to superspreaders, or influencers who are responsible for a large share of anti-vaccine content. In March, The Post reported on internal Facebook data that showed a small band of groups and influencers was responsible for the majority of vaccine skepticism, rather than everyday people expressing concerns.
In May, the nonprofit Center for Countering Digital Hate found that a dozen influencers were responsible for 65 percent of the anti-vaccine content on Facebook and Twitter. The report was cited by Biden as part of his reasoning for attacking Facebook.
After Biden mentioned the CCDH report about the "disinformation dozen," Facebook executives were rattled, according to another person familiar with the company's thinking. In discussions, executives sought to find data that would disprove the influence of the 12 anti-vaccine figures, the person said. Information that would put the company in a negative light was discarded, and efforts to provide the White House with comprehensive data on the prevalence of covid-19 misinformation were rejected.
Facebook's blog post Wednesday said the report was deeply flawed and that the 12 people were responsible "for about just 0.05% of all views of vaccine-related content" on the service.
This summer, a poll by the COVID States Project, a collaboration among academics from several universities, found that Facebook users were more vaccine-hesitant than any other group of media consumers, including consumers of Fox News. The poll, said two of the people in the Biden camp, solidified the White House's perception that the world's largest social network was not being forthright. Facebook has said the poll was "sensationalized" and not representative of the U.S. population.
The battle with the White House played out against the backdrop of other data struggles, both with outside researchers and with its own employees.
In June 2020, Facebook informed NYU's Edelson that the Ad Observatory team was in violation of the company's terms of service because it collected information about advertisers, which the company considers to be personal information on par with Facebook profiles (Edelson believes advertiser profiles and behavior should be considered public). In October 2020, NYU received a cease-and-desist order from Facebook.
The parties had been negotiating on and off for several months when, in August, Edelson wrote Facebook product managers what she thought was a routine email checking on the availability of news articles that appeared in the week before and after Jan. 6 - part of an effort to understand how local media may have contributed to the climate that led to the insurrection.
Facebook responded that a bug had restricted the articles. But a few hours later she received a harsh note: Facebook was going to shut the project down.
AlgorithmWatch, a German nonprofit tech watchdog, had a similar experience over the past year, according to a news release it posted this month. The organization, which has used its own widget, downloaded by 1,500 volunteers, to show that Instagram's algorithm encourages users to post photos showing more skin, was told by Facebook in May that it violated the terms of service. Faced with the prospect of losing access to all their Facebook accounts, AlgorithmWatch elected to shut down the project in July.
And internally, Brian Boland, a former Facebook executive who oversaw the CrowdTangle tool for researchers and journalists, said company officials were growing increasingly unhappy with how the tool was being used to surface negative information about the company. He said executives discussed potentially limiting new features. Boland quit the company in November.
Facebook's Stone did not deny the conversations but pointed out that Facebook has not restricted or limited any aspect of CrowdTangle this year and has no plans to shut it down. In the spring, the company restructured the CrowdTangle team. The internal debates were first reported by the New York Times.
"Facebook wants to select data points that would tell a story that they want to tell, but that's not the same as actual transparency - as saying, 'Here's a bunch of public data where you can see what is happening on the platform so we can solve this once and for all,' " Boland said.
The Washington Post