Social media sites, trolls face legal action over NZ terror video

A police officer stands guard outside Al Noor mosque in Christchurch, New Zealand last week. Legal action against social media companies, Internet users and trolls is gathering pace over the online reaction to the video. (Reuters)
Updated 25 March 2019

  • French group sues Facebook and YouTube for allowing live-stream of mosque attack
  • Footage could still be found on Facebook and YouTube hours after the attack

LONDON: Legal action against social media companies, Internet users and trolls is gathering pace over the online reaction to this month’s New Zealand mosque massacre, in which 50 people were killed. 

Brenton Tarrant, who has been charged with murder over the terrorist attack on two Christchurch mosques and who live-streamed the attack on social media, is set to appear in court on April 5, where he is expected to face a host of additional charges. Social media companies, people who shared Tarrant’s violent video, and those who posted offensive comments online have all faced legal challenges since the terror attack. 

The French Council of the Muslim Faith (CFCM), one of the main groups representing Muslims in France, said on Monday it was suing Facebook and YouTube, Reuters reported.

The group accused the social-media giants of inciting violence by allowing the streaming of footage of the New Zealand massacre on their platforms.

It alleged that the companies had disseminated material that encouraged terrorism, and harmed the dignity of human beings. 

A YouTube spokesperson said the company had “removed tens of thousands of videos and terminated hundreds of accounts created to promote or glorify the shooter” since the attack. 

“Our teams are continuing to work around the clock to prevent violent and graphic content from spreading, we know there is much more work to do,” they added. 

A Facebook representative did not immediately respond to a request for comment when contacted by Arab News. The service said earlier that in the first 24 hours after the shooting, it blocked more than 1.2 million attempts to upload the video and removed a further 300,000 copies that had been uploaded.

But a few hours after the attack, footage could still be found on both Facebook and YouTube, and the two platforms have faced widespread criticism as a result. 

Abdallah Zekri, president of the CFCM’s Islamophobia monitoring unit, said the organization had launched a formal legal complaint against Facebook and YouTube in France, Reuters reported on Monday. 

The council said it was suing the French branches of the two tech giants for “broadcasting a message with violent content abetting terrorism, or of a nature likely to seriously violate human dignity and liable to be seen by a minor,” according to a copy of the complaint seen by AFP.

Under French law, such acts are reportedly punishable by three years’ imprisonment and a €75,000 ($85,000) fine.

Other action has also been taken against online trolls who made offensive comments about the terror video — as well as those sharing it. 

In the UK, seven people were arrested for hate crimes in the Greater Manchester area over the mosque shootings, with one man having called the gunman a “hero,” the BBC reported. The local police force said it had received 11 reports of offensive behavior related to the attack, nine of them made online.

New Zealand’s legal right to freedom of expression is subject to tighter restrictions than in many other countries, meaning people could face legal action simply for seeking out and watching the video.

As of March 21, at least two people in New Zealand had been charged with sharing the 17-minute video on social media platforms under a law forbidding “possession or dissemination of material depicting extreme violence and terrorism.” More people could be charged for publicizing the attack under a human rights law which bans “incitement of racial disharmony.”

Philip Neville Arps appeared in court in Christchurch on March 20 on two charges related to reposting the video. An unnamed Christchurch teenager was also denied bail after being arrested for posting, a week before the attack, a photograph of one of the mosques where the shootings took place, with a caption that read: “Target acquired.”

Both face a maximum of 14 years in prison if they are found guilty, according to The New York Times. 

Even those commenting on the attack who are suspected of inciting racial disharmony can be charged. A woman from northern New Zealand was arrested over Facebook comments she made after the attack; if charged and convicted, she could face a fine of NZ$7,000 ($4,800).

New Zealand’s chief censor, David Shanks, acknowledged that many people may have viewed the Christchurch mosque video by accident. He warned that, while those who spread the video risked arrest and criminal charges, even possessing the video unintentionally was a crime.

“Every New Zealander should now be clear that this clip is an illegal, harmful and reprehensible record created to promote a terrorist cause,” Shanks said last week. 

“If you have a record of it, you must delete it. If you see it, you should report it. Possessing or distributing it is illegal, and only supports a criminal agenda,” he added.

The social media platforms that hosted the video could also face legal consequences in New Zealand. Prime Minister Jacinda Ardern has said that her country, possibly with assistance from others, will investigate the role social media played in the attack.

“We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published,” she told New Zealand’s Parliament last week. “They are the publisher, not just the postman.”


Google tightens political ads policy to thwart abuse

Updated 21 November 2019

  • The Internet company said its rules already bar any advertiser, including those with political messages, from lying

SAN FRANCISCO: Alphabet Inc.’s Google will stop giving advertisers the ability to target election ads using data such as public voter records and general political affiliations, the company said in a blog post on Wednesday.

The move comes at a time when social media platforms are under pressure over their handling of political advertising ahead of the US presidential election in 2020.

Google said it would limit audience targeting for election ads to age, gender and general location at a postal-code level. Political advertisers can still target contextually, for example by serving ads to people reading about a certain topic.

Previously, verified political advertisers could also target ads using data gleaned from users’ behavior, such as search actions, that categorized them as left-leaning, right-leaning or independent. They could also upload data such as voter-file lists to target ads to a lookalike audience that exhibited similar behaviors to those in the data.

Google will enforce the new approach in the United Kingdom within a week, ahead of the country’s general election on Dec. 12. It said it would enforce it in the European Union by the end of the year and in the rest of the world starting on Jan. 6, 2020.

“Given recent concerns and debates about political advertising, and the importance of shared trust in the democratic process, we want to improve voters’ confidence in the political ads they may see on our ad platforms,” Scott Spencer, vice president of product management for Google Ads, said in the blog post.

Google is the top seller of online ads in the United States, but smaller rivals with fewer targeting restrictions may now attract more business from campaigns, one political ad buyer, speaking on condition of anonymity, told Reuters on Wednesday.

Google added examples to its misrepresentation policy to show that it would not allow false claims about election results or the eligibility of political candidates based on age or birthplace.

Last month, Google refused to remove an ad run by President Donald Trump’s re-election campaign on its YouTube video-streaming service that Democratic presidential hopeful Joe Biden’s campaign said contained false claims, ruling that the ad did not violate its policy.

A Google spokeswoman told Reuters on Wednesday that the video would still be allowed under the latest policy.

Social media giant Facebook Inc. has been criticized by lawmakers and regulators over its decision not to fact-check ads run by politicians on its platform, while Twitter has decided to ban political ads.

Google also clarified that its policies for political and nonpolitical ads prohibit doctored and manipulated media.

On Dec. 3, the company will expand its ad transparency efforts to ads related to state-level elections, including them in an online database created to catalog political advertising.