Social media platforms leave 95% of reported fake accounts up, study finds

One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the US Capitol in Washington, DC, April 10, 2018.

It’s no secret that every major social media platform is chock-full of bad actors, fake accounts, and bots. The big companies continually pledge to do a better job weeding out organized networks of fake accounts, but a new report confirms what many of us have long suspected: they’re pretty terrible at doing so.

The report comes this week from researchers with the NATO Strategic Communication Centre of Excellence (StratCom). During a four-month period between May and August of this year, the research team conducted an experiment to see just how easy it is to buy your way into a network of fake accounts and how hard it is to get social media platforms to do anything about it.

The research team spent €300 (about $332) to purchase engagement on Facebook, Instagram, Twitter, and YouTube, the report (PDF) explains. That sum bought 3,520 comments, 25,750 likes, 20,000 views, and 5,100 followers. They then used those interactions to work backward to about 19,000 inauthentic accounts that were used for social media manipulation purposes.

About a month after buying all that engagement, the research team checked the status of those fake accounts and found that about 80 percent were still active. So they reported a sample of those accounts to the platforms as fraudulent. Then came the most damning statistic: three weeks after being reported as fake, 95 percent of the accounts were still active.

“Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behavior on their platforms,” the researchers concluded. “Self-regulation is not working.”

Too big to govern

The social media platforms are fighting a distinctly uphill battle. The scale of Facebook’s challenge, in particular, is enormous. The company boasts 2.2 billion daily users of its combined platforms. Broken down by platform, the original big blue Facebook app has about 2.45 billion monthly active users, and Instagram has more than one billion.

Facebook frequently posts status updates about “removing coordinated inauthentic behavior” from its services. Each of those updates, however, tends to snag between a few dozen and a few hundred accounts, pages, and groups, usually sponsored by foreign actors. That’s barely a drop in the bucket compared just to the 19,000 fake accounts that one research study uncovered from a single €300 outlay, let alone the vast ocean of other fake accounts out there in the world.

The issue, however, is both serious and pressing. A majority of the accounts found in this study were engaged in commercial behavior rather than political troublemaking. But attempted foreign interference in both a crucial national election on the horizon in the UK this month and the high-stakes US federal election next year is all but guaranteed.

The Senate Intelligence Committee’s report (PDF) on social media interference in the 2016 US election is expansive and thorough. The committee determined Russia’s Internet Research Agency (IRA) used social media to “conduct an information warfare campaign designed to spread disinformation and societal division in the United States,” including targeted ads, fake news articles, and other tactics. The IRA used and uses several different platforms, the committee found, but its primary vectors are Facebook and Instagram.

Facebook has promised to crack down hard on coordinated inauthentic behavior heading into the 2020 US election, but its challenges with content moderation are by now legendary. Working conditions for the company’s legions of contract content moderators are terrible, as repeatedly reported—and it’s hard to imagine how many humans you’d need to review the staggering volume of content posted every day. Using software tools to recognize and block inauthentic actors is obviously the only way to catch them at any meaningful scale, but the development of those tools is clearly still a work in progress.

Musk takes the stand in first day of “pedo guy” trial

Elon Musk arrives at federal court in Los Angeles on Tuesday, Dec. 3, 2019.

Elon Musk has never been someone to back down from a fight. On Tuesday, Musk’s confrontational personality brought him to a Los Angeles federal courtroom to testify in a defamation lawsuit brought by British cave explorer Vernon Unsworth. Musk told the court that he didn’t intend for people to take it literally when he labeled Unsworth a “pedo guy” on Twitter, a site where he had more than 20 million followers.

Musk’s feud with Unsworth began in July 2018, when both men were trying to help a dozen boys trapped in a flooded cave in Thailand. Unsworth, who had years of prior experience with the cave, advised authorities on the rescue effort. Meanwhile, Musk assembled a team of SpaceX engineers to construct a “miniature submarine” to aid in the operation.

The submarine was never used; the boys had already been rescued by the time it arrived in Thailand. When Unsworth was asked about Musk’s invention on CNN, he scoffed. The contraption had “absolutely no chance of working,” Unsworth said, adding that Musk should “stick his submarine where it hurts.”

Musk responded angrily on Twitter, vowing to demonstrate that the submarine could have squeezed through the tightest passages on the rescue route. “Sorry pedo guy, you really did ask for it,” Musk added.

In his Tuesday court testimony, Musk said that he was merely trading one schoolyard taunt for another. Unsworth’s comments were “an unprovoked attack on what was a good-natured attempt to help the kids,” Musk told the court. “It was wrong and insulting, and so I insulted him back.”

“I thought he was just some random creepy guy,” Musk said, according to Reuters. “I thought at the time that he was unrelated to the rescue.”

“I knew he didn’t literally mean to sodomize me with a submarine, just as I didn’t literally mean he was a pedophile,” Musk said.

“I fucking hope he sues me”

But Unsworth’s lawyers have pointed to a string of subsequent statements that suggest Musk did mean it literally.

“Bet ya a signed dollar it’s true,” Musk tweeted when someone objected to his use of the phrase “pedo guy.” In a tweet a month later, he asked: “You don’t think it’s strange he hasn’t sued me?”

Then in an email to BuzzFeed reporter Ryan Mac, Musk claimed that Unsworth traveled “to Chiang Rai for a child bride who was about 12 years old at the time” and described Unsworth as a “child rapist.” (Unsworth has denied these claims.)

“I fucking hope he sues me,” Musk wrote. Musk labeled his email “off the record.” But Mac, who hadn’t agreed to keep the exchange confidential, published it anyway.

In October this year, Mac revealed Musk’s source for these explosive charges: a private investigator whom Musk paid $50,000 to dig up dirt on Unsworth. Musk argues that his later statements were based on the investigator’s preliminary findings. But the investigator turned out to have a felony fraud conviction, and he never turned up evidence supporting the claims. Unsworth’s wife says she was actually 33, not 12, when she met Unsworth.

“Joking, taunting tweets”

In his opening statement, Musk’s lawyer argued that his tweets were not allegations of criminal behavior by Unsworth. “They’re joking, taunting tweets in a fight between men,” he said.

He also accused Unsworth of wanting to “milk his 15 minutes of fame,” according to the New York Post.

But an attorney for Unsworth portrayed Musk as vain and vindictive. He said Unsworth sued Musk for “accusing him of being a pedophile in what should have been the proudest moment of his life.” Musk’s tweets caused Unsworth “shame, mortification, worry, and distress,” the lawyer told jurors.

Musk’s high profile has made it difficult for the court to assemble an impartial jury. One potential juror was dismissed because he had an interview scheduled at SpaceX later in the month. Others were dismissed because they followed Musk on Twitter and had followed the case. Another prospect was let go after admitting she had strong opinions about billionaires.