The Twitter application is seen on a phone screen on Aug. 3, 2017. (Thomas White/Reuters)
A swarm of bots on Twitter sent out identical, co-ordinated tweets this week about a proposed smart city project in Toronto.
After the tweets caught the attention of journalists and privacy experts, Twitter suspended the accounts. But the moment provided a valuable glimpse into the inner workings of these kinds of networks, and into the techniques being used to make bots harder to detect.
On Monday, freelance journalist Sean Craig noticed that dozens of Twitter accounts were tweeting identical messages and sharing a link to a press release about a controversial smart neighbourhood that’s been proposed by Sidewalk Labs, a subsidiary of Google’s parent company Alphabet.
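The giveaway in this case was dozens of accounts posting the same text at roughly the same time. That kind of coordination can be flagged with a simple grouping check. The sketch below is illustrative only; the data format (handle, text pairs) and threshold are assumptions, not taken from any real Twitter API or published detector:

```python
from collections import defaultdict

def find_coordinated_tweets(tweets, min_accounts=3):
    """Group tweets by normalized text and flag any message posted
    verbatim by several distinct accounts, a common botnet signal."""
    by_text = defaultdict(set)
    for handle, text in tweets:
        # Normalize case and whitespace so trivial edits don't hide a match.
        key = " ".join(text.lower().split())
        by_text[key].add(handle)
    return {text: accounts for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}

# Hypothetical sample data for illustration.
sample = [
    ("alicejones84", "Great read on smart city privacy: example.org/fpf"),
    ("bobsmith1999", "Great read on smart city privacy: example.org/fpf"),
    ("carolwu23",    "Great read on smart city privacy:  example.org/fpf"),
    ("dave_real",    "My own unrelated opinion about traffic."),
]
flagged = find_coordinated_tweets(sample)
```

Normalizing before grouping matters: bots sometimes vary spacing or capitalization slightly, and an exact-string comparison would miss those near-duplicates.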
A selection of the identical tweets sent from accounts that appear to be bots, which Twitter later disabled. (Screengrab/Twitter)
“I write about Sidewalk Labs from time to time, so I regularly search Twitter to see if anything interesting has been said or written about them,” Craig said via email. “I happened to search at a time when that bot network was posting that article. It was merely serendipitous timing.”
The press release had been published on the website of the Future of Privacy Forum (FPF), a non-profit think-tank that focuses on data privacy and has received funding from Google. FPF noticed Craig's tweet almost immediately and started to investigate, according to John Verdi, the organization's vice-president of policy.
“We all sort of looked at each other and said, ‘I didn’t buy a botnet, did you buy a botnet?’” Verdi said.
Craig and the team at FPF reached out to Sidewalk Labs, which said it had not hired any kind of botnet either.
“We had nothing to do with the bots,” Keerthana Rang, the associate director of communications for Sidewalk Labs, confirmed in an email to CBC News. “If you take a look at their Twitter feeds, they tweet out a wide assortment of articles about privacy unrelated to Sidewalk Labs.”
Verdi says they reported the accounts to Twitter, and Twitter suspended them. But the whole encounter piqued the interest of Verdi and others at FPF, whose research often intersects with issues surrounding social media.
Verdi noted that many of the accounts used images of people wearing sunglasses, a clue that they may have been photos generated using a neural network rather than real photos of people. “Eyes are hard,” Verdi said.
The profile of one of the accounts that shared the identical tweet and was later suspended by Twitter. (Screengrab/Twitter)
The handles also all followed a similar format: a first name, a last name, and a number or string of numbers. All the accounts had bios written in a similar style, comprising three or four short descriptors, the style produced by a popular online generator that creates fake profiles.
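The naming and bio patterns described above are easy to express as a rough heuristic. This is a minimal sketch; the regex and the definition of a "terse" bio are illustrative choices, not drawn from any published bot-detection tool:

```python
import re

# Handle shaped like letters followed by a run of digits, e.g. "janedoe1987".
HANDLE_PATTERN = re.compile(r"^[a-z]+[0-9]{2,}$", re.IGNORECASE)

def looks_bot_like(handle, bio):
    """Return True if a profile matches the simple red flags described
    in the article: a name-plus-numbers handle and a terse, list-style bio."""
    descriptors = [d for d in re.split(r"[.,;|]", bio) if d.strip()]
    terse_bio = 2 <= len(descriptors) <= 4 and all(
        len(d.split()) <= 4 for d in descriptors)
    return bool(HANDLE_PATTERN.match(handle)) and terse_bio

# Hypothetical profiles for illustration.
result_a = looks_bot_like("marktaylor4821", "Coffee lover. Privacy nerd. Proud dad.")
result_b = looks_bot_like("cbcnews", "Canada's national broadcaster, bringing you news.")
```

A heuristic like this produces false positives on its own, which is why tools such as Botometer combine many signals (posting cadence, follower graph, content) rather than relying on profile shape alone.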
CBC News tested a sample of the profiles on Botometer, a university-developed online tool that measures how likely it is that a Twitter account is a bot, and each received a high rating.
A sample of the accounts rated high on Botometer. (Screengrab/Botometer)
Verdi said he believes the bots were part of a network that was being built to be sold or rented, and sharing the Sidewalk Labs story was part of that strategy.
“My best guess is that some of the folks who run these botnets are creating subject-matter-specific networks, and they’re creating networks of bots that are posing as news aggregators,” Verdi said. “My bet is that whomever ran this thing was developing it for use later, that the use we saw was not actually the ultimate use. My bet is that they were waiting for someone to buy or rent it.”
Having accounts that are well established, with a history of tweets on a particular topic and a handful of followers, makes the botnet more valuable because it’s harder to detect, Verdi said. CBC News was not able to identify who was behind the botnet.
Bots exist on nearly all social media platforms and are far from a new phenomenon, but creating convincing networks of thousands of bots has become easier, and that has created a black market for these swarms of fake accounts.
Networks of bots can be bought and sold for as little as $45, and are easily found online on the surface web (as opposed to the dark web). Accounts that are older, are U.S.-based, have established profiles, or have a linked phone number are more valuable, as they're less likely to be detected.
In the past, a Twitter account that was brand new, with no followers and no photo, raised a red flag that it was a bot. Later, as bots started using photos to legitimize the accounts, one could identify a fake account by running a reverse image search on the photo used in the profile to see if it was stolen or a stock image.
But the people behind these bots are getting smarter, and are now creating robust profiles with unique photos that are a lot harder to quickly spot as fake.
As we head into a federal election in October, the Communications Security Establishment has identified the threat of a botnet spreading disinformation or stoking political divides among Canadian voters as a potential risk. And while this network was caught and shut down, the next one might not be as easy to detect.
Have you spotted something fishy online? Send your disinformation news tips to firstname.lastname@example.org