Twitter shuts more than 200,000 Chinese accounts targeting Hong Kong protests

Protesters walk along a street during a rally in Hong Kong on August 18, 2019, in the latest opposition to a planned extradition law that has since morphed into a wider call for democratic rights in the semi-autonomous city. (AFP / Manan Vatsyayana)
Updated 20 August 2019

  • Twitter traced the Hong Kong campaign to two fake Chinese and English Twitter accounts that pretended to be news organizations based in Hong Kong
  • An additional 936 core accounts Twitter believes originated from within China attempted to sow political discord in Hong Kong

WASHINGTON: Twitter said Monday it has suspended more than 200,000 accounts that it believes were part of a Chinese government influence campaign targeting the protest movement in Hong Kong.
The company also said it will ban ads from state-backed media companies, expanding a prohibition it first applied in 2017 to two Russian entities.
Both measures are part of what a senior company official portrayed in an interview as a broader effort to curb malicious political activity on a popular platform that has been criticized for enabling election interference around the world and for accepting money for ads that amount to propaganda by state-run media organizations.
The accounts were suspended for violating the social networking platform’s terms of service and “because we think this is not how people can come to Twitter to get informed,” the official said in an interview with The Associated Press.
The official, who spoke on condition of anonymity because of security concerns, said the Chinese activity was reported to the FBI, which investigated Russian efforts to interfere in the 2016 US presidential election through social media.
After being notified by Twitter and conducting its own investigation, Facebook said Monday that it has also removed seven pages, three groups and five accounts, including some portraying protesters as cockroaches and terrorists.
Facebook, which is more widely used in Hong Kong, does not release data on such state-backed influence operations.
Twitter traced the Hong Kong campaign to two fake Chinese and English Twitter accounts that pretended to be news organizations based in Hong Kong, where pro-democracy demonstrators have taken to the streets since early June calling for full democracy and an inquiry into what they say is police violence against protesters.
Though Twitter is banned in China, it is available in Hong Kong, a semi-autonomous region.
The Chinese-language account, @HKpoliticalnew, and the English-language account, @ctcc507, pushed tweets depicting protesters as violent criminals in a campaign aimed at influencing public opinion around the world. One of those accounts was tied to a suspended Facebook account that went by the same moniker: HKpoliticalnew.
An additional 936 core accounts Twitter believes originated from within China attempted to sow political discord in Hong Kong by undermining the protest movement’s legitimacy and political positions.
About 200,000 more automated Twitter accounts amplified the messages, engaging with the core accounts in the network. Few tweeted more than once, the official said, mostly because Twitter quickly caught many of them.
The Twitter official said the investigation remains ongoing and there could be further disclosures.
The Twitter campaign reflects the fact that the Chinese government has studied the role of social media in mass movements and fears the Hong Kong protests could spark wider unrest, said James Lewis at the Center for Strategic and International Studies.
“This is standard Chinese practice domestically, and we know that after 2016 they studied what the Russians did in the US carefully,” Lewis said. “So it sounds like this is the first time they’re deploying their new toy.”
Twitter has sought to monitor its network more aggressively for malicious political activity since the 2016 presidential election and to be more transparent about its investigations, publicly releasing data about state-backed influence operations since October 2018 so others can evaluate it, the official said.
“We’re not only telling the public this happened, we’re also putting the data out there so people can study it for themselves,” the official said.
As for state-backed media organizations, they are still allowed to use Twitter, but they can no longer pay for ads, which appear in users’ timelines regardless of whether those users follow the organization’s account.
Twitter declined to provide a list of what it considers state-backed media organizations, but a representative said it may consider doing so in the future. In 2017, Twitter specifically announced it would ban Russia-based RT and Sputnik from advertising on its platform.


Facebook still auto-generating Daesh, Al-Qaeda pages

Updated 19 September 2019

  • Facebook has been working to limit the spread of extremist material on its service, so far with mixed success
  • But as the report shows, plenty of material slips through the cracks and gets auto-generated

WASHINGTON: In the face of criticism that Facebook is not doing enough to combat extremist messaging, the company likes to say that its automated systems remove the vast majority of prohibited content glorifying the Daesh group and Al-Qaeda before it’s reported.
But a whistleblower’s complaint shows that Facebook itself has inadvertently provided the two extremist groups with a networking and recruitment tool by producing dozens of pages in their names.
The social networking company appears to have made little progress on the issue in the four months since The Associated Press detailed how pages that Facebook auto-generates for businesses are aiding Middle East extremists and white supremacists in the United States.
On Wednesday, US senators on the Committee on Commerce, Science, and Transportation questioned representatives from social media companies, including Monika Bickert, who heads Facebook’s efforts to stem extremist messaging. Bickert did not address Facebook’s auto-generation of pages during the hearing, but she faced some skepticism that the company’s efforts were effectively countering extremists.
The new details come from an update of a complaint to the Securities and Exchange Commission that the National Whistleblower Center plans to file this week. The filing obtained by the AP identifies almost 200 auto-generated pages, some for businesses and others for schools or other categories, that directly reference the Daesh group, and dozens more representing Al-Qaeda and other known extremist groups. One page listed as a “political ideology” is titled “I love Islamic state.” It features a Daesh logo inside the outline of Facebook’s famous thumbs-up icon.
In response to a request for comment, a Facebook spokesperson told the AP: “Our priority is detecting and removing content posted by people that violates our policy against dangerous individuals and organizations to stay ahead of bad actors. Auto-generated pages are not like normal Facebook pages as people can’t comment or post on them and we remove any that violate our policies. While we cannot catch every one, we remain vigilant in this effort.”

Facebook has a number of functions that auto-generate pages from content posted by users. The updated complaint scrutinizes one function that is meant to help business networking. It scrapes employment information from users’ pages to create pages for businesses. In this case, it may be helping the extremist groups because it allows users to like the pages, potentially providing a list of sympathizers for recruiters.
The new filing also found that users’ pages promoting extremist groups remain easy to find with simple searches using their names. Researchers uncovered one page for “Mohammed Atta” with an iconic photo of the Al-Qaeda adherent who was one of the hijackers in the Sept. 11 attacks. The page lists the user’s work as “Al Qaidah” and education as “University Master Bin Laden” and “School Terrorist Afghanistan.”
Facebook has been working to limit the spread of extremist material on its service, so far with mixed success. In March, it expanded its definition of prohibited content to include US white nationalist and white separatist material as well as that of international extremist groups. It says it has banned 200 white supremacist organizations and removed 26 million pieces of content related to global extremist groups like Daesh and Al-Qaeda.
It also expanded its definition of terrorism to include not just acts of violence intended to achieve a political or ideological aim, but also attempts at violence, especially when aimed at civilians with the intent to coerce and intimidate. It’s unclear, though, how well enforcement works if the company is still having trouble ridding its platform of well-known extremist organizations’ supporters.
But as the report shows, plenty of material slips through the cracks and gets auto-generated.
The AP story in May highlighted the auto-generation problem, but the new content identified in the report suggests that Facebook has not solved it.
The report also says researchers found that many of the pages referenced in the AP story were removed more than six weeks later, on June 25, the day before Bickert was questioned at another congressional hearing.
The issue was flagged in the initial SEC complaint filed by the center’s executive director, John Kostyack, which alleges the social media company has exaggerated its success in combating extremist messaging.
“Facebook would like us to believe that its magical algorithms are somehow scrubbing its website of extremist content,” Kostyack said. “Yet those very same algorithms are auto-generating pages with titles like ‘I Love Islamic State,’ which are ideal for terrorists to use for networking and recruiting.”