Time is running out for extremists on social media

Updated 16 September 2018


  • The UK and EU are planning laws to hold Facebook and Twitter accountable for those who hide behind ‘secret’ groups
  • Social media companies fail to act even when online abuse spills over into real life

LONDON: They are the new autocrats. Social media sites such as Facebook, Twitter and Google rule the world, unhindered by borders and apparently unhampered by regulation. But not, perhaps, for much longer.
The British parliament is to consider introducing a law that would force online platforms to remove and ban extremist content of any nature, whether racist, xenophobic, Islamophobic or sexist. And in Brussels, the EU wants to fine online sites if they fail to remove “illegal and extremist” content within one hour.
But campaigners say such measures — well-intentioned as they are — are doomed to failure.
“The EU directive will change little. Less than 1 percent of the material out there is illegal because the staff on Google remove it automatically anyway. The other 99 percent is not illegal, even if it is extremist in nature,” said Fiyaz Mughal, founder of TellMAMA (Measuring Anti-Muslim Attacks). “This is just political bamboozling.”
In her proposed Online Forums Bill, British member of parliament Lucy Powell aims to “tackle online hate, fake news and radicalization” by a) making moderators and administrators of social media platforms legally responsible for what appears on their sites; and b) making public the name of every secret Facebook group and how many members it has.
Her bill has broad appeal. Powell, a member of the opposition Labour party, has garnered strong support from political opponents in the Conservative party.
She is especially concerned with how Facebook, the world’s most popular social network with 1.8 billion active users, circulates extremist material and opinions through secret or closed groups, where membership is by invitation only. “Social media has given extremists a new tool with which to recruit and radicalize,” she said. “It is something we are frighteningly unequipped to deal with.”
“Worryingly, it is on Facebook, which most of us in Britain use, where people are being exposed to extremist material. Instead of small meetings or obscure websites in the darkest corners of the Internet, our favorite social media site is increasingly where hate is cultivated. Extremist views go unchallenged. Unacceptable language is treated as the norm. There are no societal norms in the dark crevices of the online world.”
Fiyaz Mughal of TellMAMA agrees; he said he has made the same argument countless times to officials. “I’ve been called in by politicians and senior civil servants dozens of times, both at the Home Office (interior ministry) and the Ministry of Culture, Media and Sport, and what you get is a lot of hand-wringing and talk about free speech,” he said. “The English Defense League, a far-right group, regularly post material which is racist and Islamophobic — extremist — but it is not seen as illegal.”
Closed Facebook groups may disguise their true nature, he said. “There is a group supposedly for atheists which is in reality a forum for the far right. How do we know? Because they sprinkle insignia associated with the (far-right) English Defense League all over the site. That closed group reaches 50,000 to 60,000 people, spreading hate against Muslims. Yet Facebook rejected our complaint, saying it did not contravene their standards.”
Social media companies fail to act even when online abuse spills over into real life, said Mughal. “A Muslim woman got some abuse for something she had said and told the person to go away and leave her alone. The man who posted the abusive comment then turned up at the woman’s workplace and took photos, which he sent to her. It was intimidation, to show her he could get to her.
“The police took action and were willing to arrest the man, but the woman said that would inflame the situation and asked for him to be cautioned only. I reported the incident to Facebook, but it took them four or five months to respond, and even then it was to say it did not contravene their standards. This was clearly a case of harassment and cyber-bullying, which are offenses. They clearly have no understanding of the law so who exactly is setting those standards for Facebook and all the rest?”
In Brussels, the EU has also run out of patience. Back in March, Internet firms were given three months to show they were acting more speedily to keep extremist material off their sites.
But their efforts have failed to impress. In his State of the Union address to the European parliament on Wednesday, EU chief executive Jean-Claude Juncker said that only legislation would force the companies to do the right thing. Under the proposed EU directive, they must within an hour take down any content that incites or advocates extremist offenses or shows how to commit such offenses or promotes extremist groups. If they miss that deadline they will face hefty fines of up to 4 percent of their annual global turnover, although they will also have the right to challenge removal orders. “One hour is the decisive time window (during which) the greatest damage takes place,” Juncker said.
Mughal agreed that “the greatest dissemination of hate” happens in the first hour after posting. But the directive must be accepted by all 28 EU member states and also requires each country to put in place the capacity to identify extremist content online. “But what if the different states have different ideas about what constitutes extremist or hate speech? There is no single set of laws on this and the EU’s snail-like pace in tackling the far right is hardly encouraging.”
There are solutions, he added. One is to re-classify all social media firms as publishers. “Publishers already bear responsibilities under the law, which means they can be taken to court and made to change. Or there should be an independent arbitrator with the power to impose fines. At the moment, if the police need to get involved, it is the public purse paying for a problem created by Facebook.”
In a statement, Facebook defended secret groups as places where people could come together “in a safe way to discuss sensitive issues which might otherwise put them at risk in their society.”
The statement went on: “Like all parts of Facebook, people in these groups must adhere to our Community Standards, which lay out what is and isn’t allowed on our service. These include strict rules around hate speech, harassment, bullying and terrorist and extremist content. When people break these rules, including in secret groups, we take action.”
The company has invested in security to detect problem content “without anyone needing to report it,” and of the 2.5 million pieces of hate speech removed from Facebook since January, 38 percent were “proactively flagged” by Facebook before anyone reported it.
On the EU directive, Facebook said: “There is no place for terrorism on Facebook, and we share the goal of the European Commission to fight it and believe that it is only through a common effort across companies, civil society and institutions that results can be achieved. We’ve made significant strides finding and removing terrorist propaganda quickly and at scale, but we know we can do more.”
Twitter and Google did not respond to requests for comment.


Nestle, AT&T pull YouTube ads over pedophile concerns

Updated 22 February 2019


  • A video from a popular YouTuber and a report from Wired showed that pedophiles have made unseemly comments on innocuous videos of kids
  • YouTube has faced advertiser boycotts in the past, including a widespread boycott in early 2017

SAN FRANCISCO, US: Several companies, including AT&T and Nestle, are pulling advertisements from YouTube over concerns about inappropriate comments on videos of children.
A video from a popular YouTuber and a report from Wired showed that pedophiles have made unseemly comments on innocuous videos of kids. The comments reportedly included timestamps that showed where kids innocently bared body parts.
YouTube says it disabled comments on tens of millions of videos and deleted offending accounts and channels.
Nestle and Fortnite maker Epic Games say they paused ads on YouTube while the company works on the issue. AT&T says it has removed ads until YouTube can “protect our brand from offensive content of any kind.”
YouTube has faced advertiser boycotts in the past, including a widespread boycott in early 2017. Since then YouTube has made efforts to be more transparent about how it deals with offensive comments and videos on its site.
But the latest flap shows what a persistent problem offensive content remains, said eMarketer video analyst Paul Verna.
“When you think about the scope of that platform and what they’re up against, it is really like a game of whack-a-mole to try to prevent these problems from happening,” he said.
Still, because of the powerful advertising reach of YouTube’s parent Google, brands are unlikely to stay away from YouTube for long, he said.
Digital ad spending in the US is expected to grow 19 percent this year to $129.34 billion, or 54 percent of estimated total US ad spending, according to eMarketer, with Google and Facebook accounting for nearly 60 percent of that total.
“At the end of the day, there’s a duopoly out there of Google and Facebook” in digital advertising, he said. “Any brand that doesn’t play the game with either is potentially leaving a big marketing opportunity on the table.”