SAN FRANCISCO — The day after the New Hampshire primary last month, Facebook’s security team removed a network of fake accounts that originated in Iran, which had posted divisive partisan messages about the U.S. election inside private Facebook groups.
Hours later, the social network learned that the campaign of Michael R. Bloomberg, the billionaire former New York mayor, had sidestepped its political ad process by directly paying Instagram meme accounts to post in support of his presidential bid.
That same day, a pro-Trump group called the Committee to Defend the President, which had previously run misleading Facebook ads, was found to be promoting a photo that falsely claimed to show Bernie Sanders supporters holding signs with divisive slogans such as “Illegal Aliens Deserve the Same as Our Veterans.”
Facebook, Twitter, Google and other big tech companies have spent the past three years working to avoid a repeat of 2016, when their platforms were overrun by Russian trolls and used to amplify America’s partisan divide. The internet giants have since collectively spent billions of dollars hiring staff, fortifying their systems and developing new policies to prevent election meddling.
But as the events of just one day — Feb. 12 — at Facebook showed, although the companies are better equipped to deal with the types of interference they faced in 2016, they are struggling to handle the new challenges of 2020.
Their difficulties reflect how much online threats have evolved since the 2016 election. Russia and other foreign governments once conducted online influence operations in plain sight, buying Facebook ads in rubles and tweeting in broken English, but they are now using more sophisticated tactics such as bots that are nearly impossible to distinguish from hyperpartisan Americans.
More problematic, partisan groups in the United States have borrowed Russia’s 2016 playbook to create their own propaganda and disinformation campaigns, forcing the tech companies to make tough calls about restricting the speech of American citizens. Even well-funded presidential campaigns have pushed the limits of what the platforms will allow.
“They’ve built defenses for past battles, but are they prepared for the next front in the war?” Laura Rosenberger, the director of the Alliance for Securing Democracy, a think tank that works to counter foreign interference campaigns, said of the tech companies. “Anytime you’re dealing with a sophisticated actor, they’re going to evolve their tactics as you evolve your defenses.”
By most accounts, the big tech companies have gotten better at stopping certain types of election meddling, such as foreign trolling operations and posts containing inaccurate voting information. But they are reluctant to referee other kinds of social media electioneering for fear of appearing to tip the scales. And their policies, often created hastily while under pressure, have proved confusing and inadequate.
Adding to the companies’ troubles is the coronavirus pandemic, which is straining their technical infrastructure, unleashing a new misinformation wave and forcing their employees to coordinate a vast election effort spanning multiple teams and government agencies from their homes.
In interviews over the past few months, two dozen executives and employees at Facebook, Google and Twitter described a tense atmosphere of careening from crisis to crisis to handle the newest tactics being used to sow discord and influence votes. Many spoke on the condition of anonymity because they were not authorized to publicly discuss sensitive internal issues.
Some Facebook and Google employees said they feared being blamed by Democrats for a Trump re-election, while others said they did not want to be seen as acting in Democrats’ favor. Privately, some said, the best-case scenario for them in November would be a landslide victory by either party, with a margin too large to be pinned on any one tech platform.
Google declined to speak publicly for this article. Nathaniel Gleicher, Facebook’s head of cybersecurity policy, said the threats of 2016 were less effective now but “we’ve seen threat actors evolving and getting better.” Twitter also said the threats were a game of “cat and mouse.”
“We’re constantly trying to stay one step ahead,” said Carlos Monje Jr., Twitter’s director of public policy.
Facebook Locks Down
Mark Zuckerberg, Facebook’s chief executive, ordered a “lockdown” for hundreds of employees late last year.
A lockdown is Facebook-speak for a period of intense, focused effort on a high-priority project. The workers, who included engineers and policy employees, were ordered to drop other projects and build tools to prevent interference in the 2020 election, said two people with knowledge of the instructions.
For Mr. Zuckerberg, who once delegated the messy business of politics to his lieutenants, November’s election has become a personal fixation. In 2017, after the extent of Russia’s manipulation of the social network became clear, he vowed to prevent it from happening again.
“We won’t catch everyone immediately, but we can make it harder to try to interfere,” he said.
Facebook has since required anyone running U.S. political ads to submit proof of an American mailing address, and included their ads in a publicly searchable database. It has invested billions to moderate content, drawn up new policies against misinformation and manipulated media, and hired tens of thousands of safety and security workers.
In the 2018 midterm elections, those efforts resulted in a relatively scandal-free Election Day. But 2020 is presenting different challenges.
Last year, lawmakers blasted Mr. Zuckerberg for refusing to fact-check Facebook posts or take down false ads placed by political candidates; he said it would be an affront to free speech. The laissez-faire approach has been embraced by some Republicans, including President Trump, but has made Facebook unpopular among Democrats and civil rights groups.
Still, Facebook’s rank-and-file workers are cautiously optimistic. In late January, just before the Iowa caucuses, a group of employees gathered at the company’s headquarters for a party to celebrate the end of the lockdown.
For hours, they ate, drank and watched a talent show featuring employee-led musical acts and improv comedy sketches. An Iowa state flag hung on the wall.
At one point, said two people who attended, a surprise guest entered: Mr. Zuckerberg, who stopped by to thank the team for its work.
Gaps in New Armor
Just after noon last Oct. 30, Jack Dorsey, Twitter’s chief executive, posted a string of 11 tweets to announce he was banning all political ads from the service.
“Paying to increase the reach of political speech has significant ramifications that today’s democratic infrastructure may not be prepared to handle,” he wrote.
His zero-tolerance move was one action that Twitter and companies like Google have taken to stave off another election crisis — or at least to distance themselves from the partisan fray.
Over the past year, Twitter has introduced automated systems to detect bot activity and has taken down Russian, Chinese, Venezuelan and Saudi bots. The company also prohibited users from posting information illegally obtained through a security breach.
And this month, Twitter enforced new guidelines to label or remove deceptively edited videos from its site.
“We’re moving away from a model of waiting for a report to spotting patterns of behavior that can spot stuff before it catches fire,” Mr. Monje said.
Google, which owns YouTube, also altered its policies to prevent foreign-backed disinformation campaigns and introduced transparency measures for political ads.
The changes are evident in how the Infowars conspiracy theorist Alex Jones and the Kremlin-linked news outlet RT — two of YouTube’s most popular political newscasters in 2016 — no longer wield outsize influence on the site. Once YouTube tightened its hate speech policies, it banned Mr. Jones and other repeat offenders, and tweaked its recommendation algorithm to promote more authoritative news and fewer conspiracy theories.
Google security engineers said they were embedded in every corner of the company to look for Russian-style influence campaigns. They deliver daily threat briefings to executives and are conducting “red-team” drills to practice responding to hypothetical election-meddling scenarios, like hackers potentially manipulating the Google Maps locations of polling places on voting day.
Yet gaps remain in the tech platforms’ armor.
Government officials and former employees said Twitter’s algorithms were not reliably distinguishing between bots and humans who simply tweet like bots. Election campaigns said its efforts to label manipulated media had been underwhelming. And some Twitter employees tracking election threats have been pulled away to triage misinformation about the coronavirus, such as false claims about miracle cures.
Threats have also emerged in unexpected places. In December, The New York Times revealed foreign spies were hiding in plain sight inside app stores from Google and Apple. Millions of users worldwide had downloaded a popular app, ToTok, which was leaking audio, photos, texts and contacts to United Arab Emirates intelligence officials through a network of Emirati contractors.
Apple removed ToTok, but Google reinstated the app two weeks later. For six more weeks, Emirati spies continued siphoning off Google users’ data, said security experts and intelligence officials.
Google, which declined to comment on ToTok, eventually removed it from its app store last month.
The Russian Connection
Tracing interference attempts to Russia, or any other country, has become increasingly difficult.
For Facebook, Google and Twitter, the complications were clear through the evolving tactics of Russia’s Internet Research Agency, the troll farm that meddled online in 2016. Its trolls once barely made any attempt to hide themselves online, with posts riddled with misspellings and poor grammar.
Now the Russian group has better disguised itself, posting divisive messages stolen from American sites or publications. The trolls may now also be paying Americans to post information on their behalf, to better hide their digital tracks.
In one Facebook influence campaign in Africa last year, the Russian group appeared to pay locals to attend rallies and write favorable articles about its preferred candidates.
“Figuring out who is behind these campaigns can take months, years even,” said Yoel Roth, Twitter’s head of site integrity.
To connect the dots, security executives from Twitter, Google, Facebook, Yahoo and other companies said they were meeting regularly with the Department of Homeland Security, the F.B.I. and the Office of the Director of National Intelligence. They were also trading intelligence and discussing threats over encrypted chat messages with one another.
“I talk to them more than I talk to my husband,” Mr. Roth said of his counterparts at Facebook, Google and other companies.
Disinformation Goes Domestic
The most divisive content this year may not come from Russian trolls or Macedonian teenagers peddling fake news for clicks, but from American politicians using many of the same tactics to push their own agendas.
One chief perpetrator? The White House.
Last month, Mr. Trump and other Republicans shared a video of Nancy Pelosi, the House speaker, during the president’s State of the Union address. Ms. Pelosi had ripped up a copy of Mr. Trump’s speech at the end of the address. But the video was edited so it appeared as if she had torn up the speech while he honored a Tuskegee airman and military families.
A spokesman for Ms. Pelosi called for the video to be removed from Facebook and Twitter, saying it was “deliberately designed to mislead and lie to the American people.” But the companies said the video did not violate their policies on manipulated media.
This month, Dan Scavino, the White House social media director, shared another selectively edited video. It showed former Vice President Joseph R. Biden Jr. appearing to say, “We can only re-elect Donald Trump.” In fact, the full video showed Mr. Biden saying Mr. Trump would only get re-elected if Democrats resorted to negative campaigning.
Facebook marked the video as partly false and limited its spread, but did not remove it. By the time Twitter labeled it as manipulated, it had been viewed more than five million times. Because of a glitch, some Twitter users did not see the label at all.
“The Biden video wasn’t manipulated, and if Nancy Pelosi didn’t want to see video of herself ripping up the speech, she shouldn’t have ripped up the speech,” said Tim Murtaugh, a spokesman for the Trump re-election campaign. He suggested that Twitter’s efforts to label the video were evidence of bias.
Democrats have also pushed the envelope to get messages out on social media. Mr. Bloomberg’s presidential campaign, which he suspended this month, caused headaches for the tech platforms, even as they took in millions of dollars to run his ads.
Among his campaign’s innovations was buying sponsored posts from influential Instagram meme accounts and paying “digital organizers” $2,500 a month to post pro-Bloomberg messages on their social media accounts. The campaign also posted a video of Mr. Bloomberg’s presidential debate performance, which had been edited to create the impression of long, awkward silences by his opponents.
Some of the tactics seemed perilously close to violating the tech companies’ rules on undisclosed political ads, manipulated media and “coordinated inauthentic behavior,” a term for networks of fake or suspicious accounts acting in concert.
Facebook and Twitter scrambled to react, hastily patching together responses: in some cases requiring more disclosure, in others taking no action at all.
By then, the Bloomberg campaign, which declined to comment, had set a new playbook for other campaigns to follow.
“We can’t blame Russia for all our troubles,” said Alex Stamos, Facebook’s former chief security officer, who now researches disinformation at Stanford University. “The future of disinformation is going to be domestic.”
The Tensions Within
Inside the tech companies, people charged with protecting the election have at times clashed with those whose job is to keep lawmakers happy, partly by avoiding the appearance of partisan bias.
At Facebook, those tensions spilled out last year.
In November and December, members of Facebook’s security team clashed with the policy team, whose Washington-based leadership includes several former Republican operatives, over a network of Facebook accounts, groups and pages run by The Daily Wire, a right-wing media company started by the conservative pundit Ben Shapiro.
Facebook’s security team had found The Daily Wire and other similar networks used tactics commonly associated with disinformation networks, including coordinating messaging and posts without indicating they were centrally administered, said people with knowledge of the findings.
Some security team members wanted an expanded mandate to investigate hyperpartisan networks based in the United States, the people said. But the policy team discouraged them and made it clear that foreign influence operations took priority over domestic ones, they said.
Part of the policy team’s concern, said one employee who participated in the discussions, was that taking action against a prominent right-wing network could set off a Republican backlash.
Mr. Gleicher, of Facebook, said he did not recall tensions over The Daily Wire, adding that the investigation found the site did not meet the threshold for enforcement. He also disputed that Facebook had discouraged investigations into domestic influence operations because of possible political fallout.
“We make decisions based on behavior,” he said. “Whether it’s foreign or domestic, the question is, are they engaged in these consistent behaviors?”
The specter of partisan backlash surfaced again this month, when Mr. Trump’s re-election campaign ran Facebook ads asking people to take an “Official 2020 Congressional District Census.” In fact, the ads linked to a Trump campaign survey.
That prompted an uproar. Civil rights groups said the ads could mislead voters by suggesting they were connected to the official U.S. census.
Over a frenetic 48 hours, Facebook went into damage control. Although the social network has said it would not fact-check political ads, it also prohibits misinformation about the census.
The policy team initially decided the Trump census ads did not violate Facebook’s rules. But a day later, under fire for inaction, a senior Facebook executive reversed the call.
The ads came down, after all.