A week after one of the most tumultuous and divided elections in US history, Twitter released a breakdown of its efforts to monitor and flag misleading tweets about the contest. As part of its Civic Integrity Policy, the social network labeled 300,000 misleading tweets between October 27th and November 3rd, representing 0.2% of all tweets sent about the US election. Only 456 of these tweets received a stronger warning that covered the text and limited user interaction. Separately, the New York Times reported that President Trump accounted for a significant share of those stronger warnings, with 34% of his tweets flagged between November 3rd and 6th.
Twitter reported that 74% of people who viewed labeled tweets saw them after a label was already applied. The social network also saw a 29% decrease in quote tweets of flagged tweets, which it attributed to a warning displayed before sharing. It also said it tried to stay ahead of potentially misleading information by showing all US users a series of pre-bunk prompts in their timelines and search results, reminding them that election results were likely to be delayed and that voting by mail is safe and legitimate. Twitter and other social networks like Facebook have been heavily criticized for allowing disinformation to spread during the 2016 elections.
Results of Twitter's efforts to combat misinformation in elections from October 27th to November 11th (chart created by Statista)