Time is running out for extremists on social media

The British parliament is considering a law that would force online platforms to ban extremist content while the EU wants to fine sites if they fail to remove content within an hour. (Reuters)
Updated 16 September 2018


  • The UK and EU are planning laws to hold Facebook and Twitter accountable for those who hide behind ‘secret’ groups
  • Social media companies fail to act even when online abuse spills over into real life

LONDON: They are the new autocrats. Social media sites such as Facebook, Twitter and Google rule the world, unhindered by borders and apparently unhampered by regulation. But not, perhaps, for much longer.
The British parliament is to consider introducing a law that would force online platforms to remove and ban extremist content of any nature, whether racist, xenophobic, Islamophobic or sexist. And in Brussels, the EU wants to fine online sites if they fail to remove “illegal and extremist” content within one hour.
But campaigners say such measures — well-intentioned as they are — are doomed to failure.
“The EU directive will change little. Less than 1 percent of the material out there is illegal because the staff on Google remove it automatically anyway. The other 99 percent is not illegal, even if it is extremist in nature,” said Fiyaz Mughal, founder of TellMAMA (Measuring Anti-Muslim Attacks). “This is just political bamboozling.”
In her proposed Online Forums Bill, British member of parliament Lucy Powell aims to “tackle online hate, fake news and radicalization” by a) making moderators and administrators of social media platforms legally responsible for what appears on their sites; and b) making public the name of every secret Facebook group and how many members it has.
Her bill has broad appeal. Powell, a member of the opposition Labour party, has garnered strong support from political opponents in the Conservative party.
She is especially concerned with how Facebook, the world’s most popular social network with 1.8 billion active users, circulates extremist material and opinions through secret or closed groups, where membership is by invitation only. “Social media has given extremists a new tool with which to recruit and radicalize,” she said. “It is something we are frighteningly unequipped to deal with.”
“Worryingly, it is on Facebook, which most of us in Britain use, where people are being exposed to extremist material. Instead of small meetings or obscure websites in the darkest corners of the Internet, our favorite social media site is increasingly where hate is cultivated. Extremist views go unchallenged. Unacceptable language is treated as the norm. There are no societal norms in the dark crevices of the online world.”
Fiyaz Mughal of TellMAMA agrees; he said he has made the same argument countless times to officials. “I’ve been called in by politicians and senior civil servants dozens of times, both at the Home Office (interior ministry) and the Ministry of Culture, Media and Sport, and what you get is a lot of hand-wringing and talk about free speech,” he said. “The English Defense League, a far-right group, regularly posts material which is racist and Islamophobic — extremist — but it is not seen as illegal.”
Closed Facebook groups may disguise their true nature, he said. “There is a group supposedly for atheists which is in reality a forum for the far right. How do we know? Because they sprinkle insignia associated with the English Defense League all over the site. That closed group reaches 50,000 to 60,000 people, spreading hate against Muslims. Yet Facebook rejected our complaint, saying it did not contravene their standards.”
Social media companies fail to act even when online abuse spills over into real life, said Mughal. “A Muslim woman got some abuse for something she had said and told the person to go away and leave her alone. The man who posted the abusive comment then turned up at the woman’s workplace and took photos, which he sent to her. It was intimidation, to show her he could get to her.
“The police took action and were willing to arrest the man, but the woman said that would inflame the situation and asked for him to be cautioned only. I reported the incident to Facebook, but it took them four or five months to respond, and even then it was to say it did not contravene their standards. This was clearly a case of harassment and cyber-bullying, which are offenses. They clearly have no understanding of the law so who exactly is setting those standards for Facebook and all the rest?”
In Brussels, the EU has also run out of patience. Back in March, Internet firms were given three months to show they were acting more speedily to keep extremist material off their sites.
But their efforts have failed to impress. In his State of the Union address to the European parliament on Wednesday, EU chief executive Jean-Claude Juncker said that only legislation would force the companies to do the right thing. Under the proposed EU directive, they must within an hour take down any content that incites or advocates extremist offenses or shows how to commit such offenses or promotes extremist groups. If they miss that deadline they will face hefty fines of up to 4 percent of their annual global turnover, although they will also have the right to challenge removal orders. “One hour is the decisive time window (during which) the greatest damage takes place,” Juncker said.
Mughal agreed that “the greatest dissemination of hate” happens in the first hour after posting. But the directive must be accepted by all 28 EU member states and also requires each country to put in place the capacity to identify extremist content online. “But what if the different states have different ideas about what constitutes extremist or hate speech? There is no single set of laws on this and the EU’s snail-like pace in tackling the far right is hardly encouraging.”
There are solutions, he added. One is to re-classify all social media firms as publishers. “Publishers already bear responsibilities under the law, which means they can be taken to court and made to change. Or there should be an independent arbitrator with the power to impose fines. At the moment, if the police need to get involved, it is the public purse paying for a problem created by Facebook.”
In a statement, Facebook defended secret groups as places where people could come together “in a safe way to discuss sensitive issues which might otherwise put them at risk in their society.”
The statement went on: “Like all parts of Facebook, people in these groups must adhere to our Community Standards, which lay out what is and isn’t allowed on our service. These include strict rules around hate speech, harassment, bullying and terrorist and extremist content. When people break these rules, including in secret groups, we take action.”
The company has invested in security to detect problem content “without anyone needing to report it,” and of the 2.5 million pieces of hate speech removed from Facebook since January, 38 percent were “proactively flagged” by Facebook before anyone reported it.
On the EU directive, Facebook said: “There is no place for terrorism on Facebook, and we share the goal of the European Commission to fight it and believe that it is only through a common effort across companies, civil society and institutions that results can be achieved. We’ve made significant strides finding and removing terrorist propaganda quickly and at scale, but we know we can do more.”
Twitter and Google did not respond to requests for comment.


Twitter warns global users their tweets violate Pakistani law



  • Pakistan has previously threatened to block Twitter if the company did not remove content its government found offensive
  • Pakistan banned Facebook for hosting allegedly blasphemous content for two weeks in 2010 while YouTube was unavailable from 2012 to 2016 over an amateur film about the Prophet Muhammad that led to global riots

WASHINGTON: When Canadian columnist Anthony Furey received an email said to be from Twitter’s legal team telling him he may have broken a slew of Pakistani laws, his first instinct was to dismiss it as spam.
But after Googling the relevant sections of Pakistan’s penal code, the Toronto Sun op-ed editor was startled to learn he stood accused of insulting the Prophet Muhammad — a crime punishable by death in the Islamic republic — and Twitter later confirmed the correspondence was genuine.
His perceived offense was to post cartoons of the prophet several years ago.
Furey and two prominent critics of extremism in Islam say they are “shocked” to have received notices from the social media giant this past week over alleged violations of Islamabad’s laws, despite having no apparent connection to the South Asian country.
They say the notices amount to an effort to stifle their voices, a charge Twitter denies. The company argues that the notices resulted from “valid requests from an authorized entity,” understood to mean Pakistan, that they helped users “take measures to protect their interests,” and that the process is not unique to any one country.
But Furey is the third prominent user in the space of days to publicly complain about receiving a message linked to Pakistan.
The other two are Saudi-Canadian activist Ensaf Haidar and Imam Mohammad Tawhidi, a progressive Muslim scholar from Australia who was born in Iran.
Both are outspoken critics of religious extremism and have accused the social media giant of helping to silence progressive ideas within Islam.
Furey, who detailed his experience in a column for his newspaper on Saturday, told AFP: “I’m somewhat alarmed that Twitter would even allow a country to make a complaint like this, as it almost validates their absurd blasphemy laws.”
The tweet in question was a collage of cartoons of Mohammad that he posted four years ago.
“Looking back, I remember I did it right after there had been a Daesh-inspired attack in retaliation over the cartoons,” Furey wrote in his column, adding he had not posted similar material before or since.
Tawhidi meanwhile was sent a similar notice flagging a tweet that called on Australian police to investigate extremism in mosques following a deadly knife attack in Melbourne in November.
The scholar attached the legal notice sent to him by Twitter informing him of possible violations of Pakistani law, and tweeted: “I am not from Pakistan nor am I a Pakistani citizen.
“Pakistan has no authority over what I say. Get out of here.”
Reached for comment, a spokesperson for Twitter told AFP: “In our continuing effort to make our services available to people everywhere, if we receive a valid request from an authorized entity, it may be necessary to withhold access to certain content in a particular country from time to time.”
The spokesperson added: “We notify users so that they have the opportunity to review the legal request, and the option to take measures to protect their interests.”
Pakistan has previously threatened to block Twitter if the company did not remove content its government found offensive.
It banned Facebook for hosting allegedly blasphemous content for two weeks in 2010 while YouTube was unavailable from 2012 to 2016 over an amateur film about the Prophet Muhammad that led to global riots.
Furey told AFP that although he was taken aback by the notice, “I’m at least glad they brought it to my attention that the Pakistan government has their eye on me.”
But he added: “One troubling consequence to all of this is that even people in countries without these blasphemy laws may start to self-censor for fear of the reach foreign governments will have over them in the online world.”