Poll chancers: social media sites have limited success in blocking phony US election claims

Updated 05 November 2020
  • Trump’s allegations and threats continued as the sun rose over Washington on Wednesday
  • Health fears about the coronavirus pandemic caused many states to make it easier to vote by mail

Ahead of the US elections, Facebook, Twitter and YouTube promised to clamp down on misinformation, including unsubstantiated charges of fraud and premature declarations of victory by candidates. And for the most part they did just that, albeit not without a few hiccups.

However, the steps they took did not go far enough to address the fundamental problems exposed by the 2020 presidential contest, according to critics of the social-media platforms.

“We’re seeing exactly what we expected, which is not enough, especially in the case of Facebook,” said Shannon McGregor, an assistant professor of journalism and media at the University of North Carolina.

One big test emerged early on Wednesday morning, as counts continued in close contests in key battleground states including Wisconsin, Michigan and Pennsylvania. President Donald Trump gave a speech at the White House to cheering supporters in which he prematurely declared victory, even though millions of votes were still uncounted, made unsubstantiated claims of electoral irregularities, and declared he would challenge in court poll results that had not even been announced yet. He also posted similarly misleading statements about the election on Facebook and Twitter.

It was the culmination of months of Trump spreading unfounded allegations and suspicions about the increase in mail-in voting as a result of the COVID-19 pandemic, and calling for the final election result to be announced quickly after polls closed on Nov. 3. However, while many states were able to count mail-in votes in advance of election day, others were prohibited from doing so by state law.

This slowed counts in some key states and made it difficult to gauge which candidate was leading. In general, Trump was favored by more people who voted in person on election day, and those votes were counted first in some battleground states, while Democratic challenger Joe Biden did better in mail-in and advance votes, which were counted later.

So what did tech companies do about untrue claims and unfounded allegations on election night? For the most part, they did what they said they would, which primarily meant adding labels that flagged demonstrably false or misleading election posts and pointed users to more reliable sources of information.

In the case of Twitter, this sometimes included covering up offending posts and forcing readers to click through warnings to see them. On Facebook and YouTube, it mostly involved adding more accurate and authoritative information to contentious election-related posts.

For example, Google-owned YouTube allowed footage of Trump’s White House speech, which was also broadcast by many traditional news channels, to be posted but added an “information panel” beneath the videos that pointed out that election outcomes might not be final and added a link to Google’s official results page.

“They’re just appending this little label to the president’s posts — but they’re appending those to any politician talking about the election,” said McGregor, who said that by broadcasting falsehoods just because they came from the president, the tech giants and traditional media outlets were shirking their responsibility to curb the spread of misinformation about the election.

“Allowing any false claim to spread can lead more people to accept it once it’s out there,” she added.

Trump’s social-media posts were not the only ones that attracted warning labels. A Twitter post by Republican Senator Thom Tillis in which he declared a premature reelection victory in his Senate race in North Carolina was also flagged. So was a post by a Democratic official that claimed Biden had won Wisconsin when it was too early to do so.

Trump’s allegations and threats continued as the sun rose over Washington on Wednesday. By late morning, he was tweeting unfounded claims that his early lead in some states seemed to “magically disappear” as the night went on and more ballots were counted, clearly implying some kind of impropriety.

Twitter quickly slapped a warning on the tweets that said: “Some or all of the content shared in this tweet is disputed and might be misleading about an election or other civic process.” It was one of at least three such alerts Twitter added to Trump tweets on Wednesday, which meant his posts could not be read without first seeing the warning. The site did the same to a post from another individual that Trump shared.

The likelihood of delays in counting votes in some states had been widely predicted for months. Health fears about the coronavirus pandemic caused many states to make it easier to vote by mail, and millions of people chose to do so rather than risk casting their ballot in person. As a result there were many more mail-in ballots than in any previous election, which can take longer to count.

In a message posted on Sept. 3, Facebook CEO Mark Zuckerberg said that if any candidate or campaign officials tried to declare victory prematurely, the social network would add a label to the post noting that not all of the results were known yet and include a link to the official counts.

However, it appears that Facebook limited that policy to posts by candidates and their campaigns. At least some posts by other individuals that declared premature victories in specific states were not flagged.

Twitter was a little more proactive. Based on its “civic integrity policy,” which was introduced last month, the site announced it would label and reduce the visibility of tweets that contained “false or misleading information about civic processes” and provide more context. As a result, it flagged Trump’s tweets in which he declared overall victory while votes were still being counted, as well as premature claims by him and others of victory in individual states.

The action taken on Wednesday by Twitter and Facebook is a step in the right direction, said Jennifer Grygiel, a professor at Syracuse University and a social-media expert. However, it was not very effective, particularly in the case of Twitter, because posts by major public figures gain almost instant traction, she added.

So even though Twitter added warnings to Trump’s tweets, by the time the labels were applied several minutes had passed and the misinformation was already spreading. In one case, it took more than 15 minutes for a warning to be added to a Trump tweet in which he falsely claimed that vote counters were “working hard” to make his lead in Pennsylvania “disappear.”

“Twitter can’t really enforce policies if they don’t do it before it happens, in the case of the president,” Grygiel said. “When a tweet hits the wire, essentially it goes public. It already brings this full force of impact of market reaction.”

She suggested that Twitter should consider moderating posts from prominent figures such as Trump by delaying publication until they are checked by a moderator, who can decide whether a label is needed. That would make it more difficult to spread unflagged misinformation, especially during important events such as an election.

This is less of an issue on Facebook or YouTube, where people are less likely to interact with posts in real time. Videos on YouTube might, however, become more of an issue in the days ahead, Grygiel said, if footage of Trump making false claims is shared by users who are analyzing the election.

“Generally, platforms have policies in place that are an attempt to do something — but at the end of the day they proved to be pretty ineffective,” she added. “The president felt empowered to make the claims.”