How bots threaten to derail the 2020 U.S. elections


In an article published last year in the journal Nature Communications, City College of New York researchers examined the prevalence of fake news on Twitter in the months leading up to the 2016 U.S. election, drawing on a record of 171 million tweets sent by 11 million unique accounts. The team sought to quantify the influence of the top news spreaders and concluded that bots tweeted links to fake news sites at a higher rate than any other group.

This finding wasn’t new – the role bots play in spreading false and misleading information was already well established. Research by Indiana University scientists found that over a 10-month period between 2016 and 2017, bots targeted influential users through replies and mentions in order to amplify untrue stories before they went viral. During the 2017 Catalan independence referendum in Spain, bots generated and promoted violent content aimed at users calling for independence.

However, with the 2020 U.S. elections approaching, experts are concerned that bots will dodge increasingly sophisticated filters to amplify misleading information, disrupt voting efforts, and sow post-election confusion. While people in academia and the private sector continue to pursue new techniques for identifying and disabling bots, it remains unclear to what extent bots can be stopped.

Campaigns

Bots are used around the world today to plant seeds of unrest, either by spreading misinformation or by amplifying controversial points of view. A 2019 report by the Oxford Internet Institute found evidence that bots were spreading propaganda in 50 countries, including Cuba, Egypt, India, Iran, Italy, South Korea and Vietnam. Between June 5 and June 12, 2016 – ahead of the UK referendum on leaving the EU (Brexit) – researchers estimated that half a million tweets on the subject came from bots.

Bots continue to plague social media in the U.S., most recently around the coronavirus pandemic and the Black Lives Matter movement. A Carnegie Mellon team found that bots may account for up to 60% of the accounts discussing COVID-19 on Twitter, pushing false medical advice, conspiracy theories about the virus, and calls to end lockdowns. Bot Sentinel, which tracks bot activity on social networks, observed new disinformation campaigns around Black Lives Matter in early July, including false claims that billionaire George Soros is funding the protests and that George Floyd’s death was a hoax.

But the activity that is perhaps most relevant to the upcoming election came last November, when “cyborg” bots spread misinformation during Kentucky’s gubernatorial election. (Cyborg accounts attempt to evade Twitter’s spam detection tools by having a human operator send some of their tweets.) VineSight, a company that tracks misinformation on social media, uncovered small networks of bots tweeting and retweeting messages that cast doubt on the results of the governor’s race both before and after the polls closed.

A separate Indiana University study sheds light on how social bots like those identified by VineSight operate. The bots pounce on fake news and conspiracy theories in the seconds after they are first posted, tweeting them broadly in the hope that human users will retweet them. The bots then mention influential users to get them to re-share the tweets, legitimizing and amplifying them in the process. The co-authors describe a single bot that mentioned @realDonaldTrump (President Trump’s Twitter handle) 19 times while pushing the false claim that millions of votes were cast by illegal immigrants in the 2016 presidential election.
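
The two-step pattern described above – an almost instant share of a link followed by @-mentions of influential accounts – lends itself to a simple heuristic. Below is a minimal, purely illustrative Python sketch of how such behavior might be flagged; the field names, thresholds, and the looks_like_amplifier function are assumptions made for this example, not the study’s actual method or any real Twitter API.

```python
from datetime import datetime, timedelta

def looks_like_amplifier(tweets, first_seen, influencer_handles,
                         max_delay=timedelta(seconds=10)):
    """Return True if an account shares a link almost immediately after it
    first appears and also @-mentions known influential accounts."""
    early_share = any(
        t["url_shared"] and t["time"] - first_seen <= max_delay
        for t in tweets
    )
    mentions_influencer = any(
        handle in t["mentions"]
        for t in tweets
        for handle in influencer_handles
    )
    return early_share and mentions_influencer

# Toy usage with made-up data.
first_seen = datetime(2020, 8, 1, 12, 0, 0)
account_tweets = [
    {"time": first_seen + timedelta(seconds=4), "url_shared": True,  "mentions": []},
    {"time": first_seen + timedelta(minutes=3), "url_shared": False, "mentions": ["@realDonaldTrump"]},
]
print(looks_like_amplifier(account_tweets, first_seen, {"@realDonaldTrump"}))  # True
```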

Why are people so vulnerable to bot-driven content? The Indiana University study speculates that novelty may play a role. Novel content attracts attention because it is often surprising and emotional, and sharing it can confer social status on the sharer, who is seen as someone “in the know.” False news also elicits more surprise and disgust than truthful news, motivating people to share it recklessly.

In recognition of this psychological component, Twitter recently ran an experiment that prompted users to read the full content of articles before retweeting them. Starting in May, the social network nudged a subset of users who tried to retweet a tweet containing a link they hadn’t opened to read it first. After a few months, Twitter concluded that the experiment was a success: prompted users opened articles before sharing them 40% more often than unprompted users did.

Not all bots are created equal

In anticipation of campaigns targeting the 2020 U.S. election, Twitter and Facebook say they have made progress in detecting and removing bots that promote false and malicious content. Yoel Roth, Twitter’s head of site integrity, says the company’s “proactive work” has resulted in “significant gains” in fighting manipulation across the network, with the number of suspected bot accounts falling 9% this summer compared with the previous reporting period. In March, Facebook announced that one of its AI tools had helped identify and disable more than 6.6 billion fake accounts since 2018.

Some bots are easy to spot and remove, while others are not. In a keynote at this year’s Black Hat security conference, Stanford Internet Observatory research manager Renee DiResta noted that bot campaigns orchestrated by China tend to be less effective than Russian efforts, in part because China has little practice deploying such tactics on Western platforms, which are banned inside the country. DiResta pointed out that Chinese bots often have blocks of related usernames, stock profile photos, and rudimentary biographies.
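
A rough sketch of how the surface-level signals DiResta mentions might be checked programmatically is shown below. The username pattern, account fields, and thresholds are invented for illustration only and are not drawn from the Stanford Internet Observatory’s methods.

```python
import re
from collections import Counter

def username_stem(name):
    """Strip trailing digits so 'freedom_fan_0042' and 'freedom_fan_0055'
    collapse to the same stem."""
    return re.sub(r"\d+$", "", name.lower())

def suspicious_clusters(accounts, min_cluster=3):
    """Group accounts by username stem and flag stems shared by several
    near-identical accounts that also have default photos or empty bios."""
    stems = Counter(username_stem(a["username"]) for a in accounts)
    flagged = []
    for a in accounts:
        stem = username_stem(a["username"])
        if stems[stem] >= min_cluster and (a["default_photo"] or not a["bio"]):
            flagged.append(a["username"])
    return flagged

accounts = [
    {"username": "freedom_fan_0042", "default_photo": True,  "bio": ""},
    {"username": "freedom_fan_0055", "default_photo": True,  "bio": ""},
    {"username": "freedom_fan_0078", "default_photo": False, "bio": ""},
    {"username": "jane_doe",         "default_photo": False, "bio": "Gardener, mom of two"},
]
print(suspicious_clusters(accounts))  # flags the three 'freedom_fan_' accounts
```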

While Russia’s highest-profile social media properties have smaller audiences overall (e.g., RT’s 6.4 million Facebook followers versus China Daily’s 99 million), their engagement is an order of magnitude higher thanks to their reliance on memes and other “snackable” content. “Russia is currently best in class in information operations,” said DiResta. “They are spending a fraction of the budget that China has.”

Back in March, Twitter and Facebook revealed evidence that Russian bots were becoming more sophisticated and harder to spot. Facebook said it had shut down a network of cyborg accounts, operated by people in Ghana and Nigeria on behalf of agents in Russia, that posted on topics ranging from Black history to gossip and fashion. Some of the accounts also tried to pose as legitimate non-governmental organizations (NGOs). Meanwhile, Twitter said it had removed bots spreading false news about race and civil rights.

The twin reports followed a University of Wisconsin-Madison study, led by researcher Young Mie Kim, which found that Russia-linked social media accounts were posting about the same flashpoint topics – race relations, gun laws, and immigration – as they did in 2016. “For ordinary users, it is too subtle to discern the differences,” Kim told the Wisconsin State Journal in an interview earlier this year. “By mimicking domestic actors, with similar logos (and) similar names, they are trying to avoid verification.”

Mixed findings

Despite the recognized proliferation of bots ahead of the 2020 US election, their reach is still the subject of debate.

The first challenge is defining “bots.” Some define them strictly as automated accounts (like news aggregators), while others include accounts driven by scheduling software such as Hootsuite as well as cyborg accounts. These differences of opinion show up in bot-analysis tools such as SparkToro’s Fake Followers Audit, Botcheck.me, Bot Sentinel, and NortonLifeLock’s BotSight, each of which relies on different detection criteria (a toy illustration follows below).
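
To see why tools built on different definitions disagree, consider a toy example: the same cyborg-style account can look human under a strict definition of automation and largely automated under a broader one. The scoring rules below are made up for illustration and do not correspond to any of the tools named above.

```python
def strict_automation_score(acct):
    # Definition A: only fully automated behavior counts as a bot.
    return 1.0 if acct["tweets_per_day"] > 200 and not acct["manual_replies"] else 0.0

def broad_automation_score(acct):
    # Definition B: scheduling software and cyborg behavior also count.
    score = 0.0
    if acct["uses_scheduler"]:
        score += 0.5
    if acct["tweets_per_day"] > 100:
        score += 0.3
    if not acct["manual_replies"]:
        score += 0.2
    return score

cyborg = {"tweets_per_day": 150, "manual_replies": True, "uses_scheduler": True}
print(strict_automation_score(cyborg))  # 0.0 -- "not a bot" under definition A
print(broad_automation_score(cyborg))   # 0.8 -- likely a bot under definition B
```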

In a statement sent to Wired about a bot identification service developed by University of Iowa researchers, Twitter disputed the notion that third-party services can accurately detect bot activity without access to its internal data. “Research based solely on publicly available information about accounts and tweets on Twitter often cannot paint an accurate or complete picture of the steps we take to enforce our developer policies,” a spokesperson said.

Emilio Ferrara, a University of Southern California researcher who studies social bots, generally agrees with the University of Iowa researchers’ findings. “As social media companies put more effort into curbing abuse and suppressing automated accounts, bots evolve to mimic human strategies. Advances in AI enable bots to produce more human-like content,” he said. “We need to make greater efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 U.S. elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of outside influence.”

To be clear, there is no silver bullet. As soon as Twitter and Facebook purge bots from their rolls, new ones emerge to take their place. And even if fewer bots slip through automated filters than before, fake news spreads quickly without much help from the bots that share it. Vigilance – and more experimentation along the lines of Twitter’s read-before-you-retweet prompt – could be partial antidotes. But with Election Day fast approaching, it is safe to assume those efforts will fall short.
