LONDON: Facebook, Twitter, Instagram, YouTube and TikTok failed to remove nearly 90 percent of anti-Muslim and Islamophobic content on their platforms, according to new research published on Thursday.
The study, led by the Center for Countering Digital Hate, looked at more than 530 posts, viewed 25 million times, that contained dehumanizing content about Muslims and Islam.
“Much of the hateful content we uncovered was blatant and easy to find, with even overtly Islamophobic hashtags circulating openly and hundreds of thousands of users belonging to groups dedicated to preaching anti-Muslim hatred,” said Imran Ahmed, the chief executive of the CCDH.
The messages were not limited to offensive opinions but also included caricatures, false claims and conspiracy theories. Some Instagram posts, for example, depicted Muslims as pigs and called for their expulsion from Europe.
Another social media post likened Islam to a cancer that should be “treated with radiation” and was accompanied by an image of an atomic blast. Messages on Twitter suggested that Muslim migration was part of a plot to change the politics of other countries. Many of the posts were accompanied by offensive hashtags such as #deathtoislam, #islamiscancer and #raghead.
The CCDH said that most of the hateful posts and Islamophobic content it monitored for the study were reported by users to the platforms’ community standards watchdogs. However, few were removed. Facebook, for example, took action on only seven out of 125 reported posts; Instagram on 32 out of 227; TikTok on 18 out of 50; Twitter on three out of 105; and YouTube failed to act on any of the 23 videos it received complaints about.
Researchers also found that Facebook was being used by Islamophobic groups with names such as “Islam means Terrorism,” “Stop Islamization of America” and “Boycott Halal Certification in Australia.” Many of the groups, based predominantly in the UK, US and Australia, have thousands of members.
“Fight Against Liberalism, Socialism and Islam,” for example, has almost 5,000 members. The group is run by South African lawyer Mark Taitz. It claims that “moderate Islam does not exist and too many people fail to understand this,” and encourages Facebook users to “join our group to learn about Islam and the atrocities it is committing in ‘God’s name.’”
In response to the study, Twitter said it “does not tolerate the abuse or harassment of people on the basis of religion” and highlighted the automated system it uses to flag content that violates its policies. It did not address any of the specific findings of the report but the company did admit that it “knows there is still work to be done.”
This is not the first time that social media platforms have been criticized over their responses to hate speech and offensive content. In December, for example, a report by the Institute for Strategic Dialogue, a think tank that tracks online extremism, found that Facebook failed to remove extremist content. A new tool introduced by the platform in November even tagged photos of beheadings and violent hate speech by Daesh and the Taliban as “insightful” and “engaging.”