TikTok’s lead EU regulator opens two data privacy probes

Ireland’s data watchdog earlier this month levied a record 225 million euro fine on Facebook’s WhatsApp. (File/AFP)
Updated 15 September 2021
  • EU privacy regulator opened two inquiries into how TikTok processes children’s personal data

DUBLIN: TikTok’s lead data privacy regulator in the European Union has opened two inquiries into the Chinese-owned short-video platform related to the processing of children’s personal data and transfers of personal data to China.
Ireland’s Data Protection Commission, the lead EU regulator for many of the world’s top internet firms because their regional headquarters are located in Ireland, can impose fines of up to 4 percent of a company’s global revenue.
TikTok in August announced stricter privacy controls for teenagers, seeking to address criticism that it has failed to protect children from hidden advertising and inappropriate content.
Owned by China’s ByteDance, TikTok has grown rapidly around the world, particularly among teenagers.
The first of the probes relates “to the processing of personal data in the context of platform settings for users under age 18 and age verification measures for persons under 13,” the Data Protection Commission said in a statement.
The second probe will focus on transfers by TikTok of personal data to China and whether the company complies with EU data law in its transfers of personal data to countries outside the bloc, the statement said.
A spokesperson for TikTok said it had implemented extensive policies and controls to safeguard user data and relies on approved methods for data being transferred from Europe, such as standard contractual clauses.
“The privacy and safety of the TikTok community, particularly our youngest members, is our highest priority,” the spokesperson said.
Ireland’s data watchdog earlier this month levied a record 225 million euro ($265.64 million) fine on Facebook’s WhatsApp under the EU’s 2018 General Data Protection Regulation law (GDPR).
But the watchdog has faced criticism from other European regulators over the speed of its inquiries and the severity of its sanctions.
The Irish regulator had 27 international inquiries in progress at the end of last year, including 14 into Facebook and its subsidiaries.


Meta’s Oversight Board issued 20 decisions in its first year. Is that enough?

An aerial view shows a newly unveiled logo for "Meta" in front of Facebook headquarters in Menlo Park on October 28, 2021. (AFP)
Updated 28 June 2022
  • The first annual report from the independent review body, which is funded by Meta, explains the reasoning behind its 20 rulings and the 86 recommendations it has made

DUBAI: Meta’s Oversight Board has published its first annual report. Covering the period from October 2020 to December 2021, it describes the work the board has carried out in relation to how Meta, the company formerly known as Facebook, treats its users and their content, and the work that remains to be done.

The board is an independent body set up and funded by Meta to review content and content-moderation policies on Facebook and Instagram. It considers concerns raised by Meta itself and by users who have exhausted the company’s internal appeals process. It can recommend policy changes and issue rulings that overturn the company’s own decisions.

During the period covered by the report, the board received more than a million appeals, issued 20 decisions — 14 of which overturned Meta’s own rulings — and made 86 recommendations to the company.

“Through our first Annual Report, we’re able to demonstrate the significant impact the board has had on pushing Meta to become more transparent in its content policies and fairer in its content decisions,” Thomas Hughes, the board’s director, told Arab News.

One of the cases the board considered concerns a post that appeared on media organization Al Jazeera Arabic’s verified page in May 2021, and which was subsequently shared by a Facebook user in Egypt. It consisted of Arabic text and a photo showing two men, their faces covered, who were wearing camouflage and headbands featuring the insignia of the Palestinian Al-Qassam Brigades.

The text read: “The resistance leadership in the common room gives the occupation a respite until 6 p.m. to withdraw its soldiers from Al-Aqsa Mosque and Sheikh Jarrah neighborhood, otherwise he who warns is excused. Abu Ubaida – Al-Qassam Brigades military spokesman.”

The user who shared the post commented on it in Arabic by adding the word “ooh.”

Meta initially removed the post because Al-Qassam Brigades and its spokesperson, Abu Ubaida, are designated under Facebook’s Dangerous Individuals and Organizations community standard. However, it restored the post based on a ruling by the board.

The board said in its report that while the community standard policy clearly prohibits “channeling information or resources, including official communications, on behalf of a designated entity,” it also noted there is an exception to this rule for content that is published as “news reporting.” It added that the content in this case was a “reprint of a widely republished news report” by Al Jazeera and did not include any major changes other than the “addition of the non-substantive comment, ‘ooh.’”

Meta was unable to explain why two of its reviewers judged the content to be in violation of the platform’s content policies but noted that moderators are not required to record their reasoning for individual content decisions.

According to the report, the case also highlights the board’s objective of ensuring users are treated fairly because “the post, consisting of a republication of a news item from a legitimate outlet, was treated differently from content posted by the news organization itself.”

Based on allegations that Facebook was censoring Palestinian content, the board asked the platform a number of questions, including whether it had received any requests from Israel to remove content related to the 2021 Israeli-Palestinian conflict.

In response, Facebook said that it had not received any valid, legal requests from a government authority related to the user’s content in this case. However, it declined to provide any other requested information.

The board therefore recommended an independent review of these issues, as well as greater transparency about how Facebook responds to government requests.

“Following recommendations we issued after a case decision involving Israel/Palestine, Meta is conducting a review, using an independent body, to determine whether Facebook’s content-moderation community standards in Arabic and Hebrew are being applied without bias,” said Hughes.

In another case, the Oversight Board overturned Meta’s decision to remove an Instagram post by a public account that allows the discussion of queer narratives in Arabic culture. The post consisted of a series of pictures with a caption, in Arabic and English, explaining how each picture illustrated a different word that can be used in a derogatory way in the Arab world to describe men with “effeminate mannerisms.”

Meta removed the content for violating its hate speech policies but restored it when the user appealed. However, it later removed the content a second time for violating the same policies, after other users reported it.

According to the board, this was a “clear error, which was not in line with Meta’s hate speech policy.” It said that while the post does contain terms that are considered slurs, it falls under an exception for speech that is “used self-referentially or in an empowering way,” as well as an exception that allows the quoting of hate speech to “condemn it or raise awareness.”

Each time the post was reported, a different moderator reviewed it. The board was, therefore, “concerned that reviewers may not have sufficient resources in terms of capacity or training to prevent the kind of mistake seen in this case.”

Hughes said: “As demonstrated in this report, we have a track record of success in getting Meta to consider how it handles posts in Arabic.

“We’ve succeeded in getting Meta to ensure its community standards are translated into all relevant languages, prioritizing regions where conflict or unrest puts users at most risk of imminent harm. Meta has also agreed to our call to ensure all updates to its policies are translated into all languages.”

These cases illustrate the board’s commitment to bringing about positive change, and to lobbying Meta to do the same, whether that means restoring an improperly deleted post or agreeing to an independent review of a case. But is this enough?

This month, Facebook once again failed a test of its ability to detect obviously unacceptable violent hate speech. The test was carried out by the nonprofit groups Global Witness and Foxglove, which created 12 text-based adverts featuring dehumanizing hate speech that called for the murder of people belonging to Ethiopia’s three main ethnic groups — the Amhara, the Oromo and the Tigrayans — and submitted them to the platform. Despite the clearly objectionable content, Facebook’s systems approved the adverts for publication.

In March, Global Witness ran a similar test with adverts about Myanmar containing comparable hate speech, which Facebook also failed to detect. The ads were not actually published on Facebook because Global Witness alerted Meta to the test and to the violations the platform had failed to catch.

In another case, the Oversight Board upheld Meta’s initial decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities carried out in the Amhara region of Ethiopia. Because Meta had restored the post after the user appealed to the board, the ruling meant the company had to remove the content from the platform once again.

In November 2021, Meta announced that it had removed a post by Ethiopia’s prime minister, Abiy Ahmed Ali, in which he urged citizens to rise up and “bury” rival Tigray forces who threatened the country’s capital. His verified Facebook page remains active, however, and has 4.1 million followers.

In addition to its failures over content relating to Myanmar and Ethiopia, Facebook has long been accused by rights activists of suppressing posts by Palestinians.

“Facebook has suppressed content posted by Palestinians and their supporters speaking out about human rights issues in Israel and Palestine,” said Deborah Brown, a senior digital rights researcher and advocate at Human Rights Watch.

During the May 2021 Israeli-Palestinian conflict, Facebook and Instagram removed content posted by Palestinians and posts that expressed support for Palestine. HRW documented several instances of this, including one in which Instagram removed a screenshot of the headlines and photos from three New York Times op-ed articles, to which the user had added a caption that urged Palestinians to “never concede” their rights.

In another instance, Instagram removed a post that included a picture of a building and the caption: “This is a photo of my family’s building before it was struck by Israeli missiles on Saturday, May 15, 2021. We have three apartments in this building.”

Digital rights group Sada Social said that in May 2021 alone it documented more than 700 examples of social media networks removing or restricting access to Palestinian content.

According to HRW, Meta’s acknowledgment of errors that were made and attempts to correct some of them are insufficient and do not address the scale and scope of reported content restrictions, nor do they adequately explain why they occurred in the first place.

Hughes acknowledged that some of the commitments to change made by Meta will take time to implement but added that it is important to ensure that they are “not kicked into the long grass and forgotten about.”

Meta admitted this year in its first Quarterly Update on the Oversight Board that it takes time to implement recommendations “because of the complexity and scale associated with changing how we explain and enforce our policies, and how we inform users of actions we’ve taken and what they can do about it.”

In the meantime, Hughes added: “The Board will continue to play a key role in the collective effort by companies, governments, academia and civil society to shape a brighter, safer digital future that will benefit people everywhere.”

However, the Oversight Board only reviews cases reported by users or by Meta itself. According to some experts, the issues with Meta go far beyond the current scope of the board’s mandate.

“For an oversight board to address these issues (Russian interference in the US elections), it would need jurisdiction not only over personal posts but also political ads,” wrote Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School.

“Beyond that, it would need to be able to not only take down specific pieces of content but also to halt the flow of American consumer data to Russian operatives and change the ways that algorithms privilege contentious content.”

He went on to suggest that the board’s authority should be expanded from content takedowns to include “more critical concerns” such as the company’s data practices and algorithmic decision-making because “no matter where we set the boundaries, Facebook will always want to push them. It knows no other way to maintain its profit margins.”


Delhi police arrest Muslim journalist over Twitter post

Mohammed Zubair. (Twitter @zoo_bear)
Updated 28 June 2022
  • Some local media reports linked Zubair’s arrest Monday to the recent controversy over incendiary remarks about Prophet Muhammad made by a BJP spokesperson, which sparked widespread global protests and outrage from the Islamic world

NEW DELHI: Indian police on Monday arrested the co-founder of a top fact-checking website who has been a vocal critic of Prime Minister Narendra Modi’s government, his colleague said.
Mohammed Zubair was arrested in Delhi after being called in for questioning in an earlier case, said Pratik Sinha, who runs the Alt News website together with Zubair.
Sinha said in a post on Twitter that his colleague was arrested illegally and without warning and was being held by police in Delhi.
Zubair has been one of the fiercest critics of Modi’s ruling Hindu nationalist Bharatiya Janata Party and has frequently called out hate speech by Hindu fringe groups on the Internet.
He has faced several legal cases over the years which his supporters dismiss as politically motivated attempts to silence a critic.
Some local media reports linked Zubair’s arrest Monday to the recent controversy over incendiary remarks about Prophet Muhammad made by a BJP spokesperson, which sparked widespread global protests and outrage from the Islamic world.
Many Hindu nationalists in the last few weeks have drawn attention to past comments on social media made by Zubair and other Modi critics and demanded that he be prosecuted for hurting their religious feelings.
Most government critics, however, see Zubair’s arrest as part of a crackdown on free-speech and rights activists that India has seen since Modi’s ascent to power in May 2014.
On Saturday, police detained activist Teesta Setalvad, who hails from Modi’s western home state of Gujarat. Setalvad has been campaigning to have Modi declared complicit in deadly sectarian riots 20 years ago.
Protests were held in several Indian cities on Monday with rights activists and free-speech organizations demanding Setalvad’s release and describing her detention as “politics of vengeance.”


Emirates to air Shahid content exclusively on inflight entertainment system

Updated 27 June 2022
  • MBC Group’s Natasha Matos-Hemingway: We are excited to offer Shahid’s content for Emirates’ customers to enjoy, just in time for the busiest travel season of the year
  • The partnership sees Emirates growing its library of Arabic content on ice, which currently includes over 420 audio channels and 170 film and TV show channels

DUBAI: Emirates has partnered with MBC Group’s streaming platform Shahid to offer premium content exclusively on its inflight entertainment system, ice.

The partnership makes ice the only platform, aside from Shahid’s own premium subscription service, to offer access to Shahid Originals.

“We are excited to offer Shahid’s content for Emirates’ customers to enjoy, just in time for the busiest travel season of the year,” said Natasha Matos-Hemingway, chief commercial and marketing officer (VOD) at MBC Group.

Starting in July, 135 hours of Shahid content from 15 shows will be available on ice.

The content has subtitles to ensure its accessibility to a large international audience.

The partnership sees Emirates growing its library of Arabic content on ice, which currently includes over 420 audio channels and 170 film and TV show channels.

“We are excited to welcome the world’s leading Arabic streaming service content on board so passengers can catch up on all their favorite entertainment inflight, just as they do at home,” said Patrick Brannelly, Emirates’ senior vice-president, Retail, IFE & Connectivity.

Shahid’s biggest original production, “Rashash,” which has been hugely popular in the Arab region, will be available on an airline for the first time through Emirates.

Other titles include “Anbar 6,” “Hell’s Gate,” “Al Shak,” “Dofa'at Beirut,” and “Salon Zahra.”

“Emirates is our first airline partner, and their global footprint enables us to reach viewers from many new markets and broaden the reach of our shows and brand — there is no better match for our ambitions,” Matos-Hemingway added.


Google hit with antitrust complaint by Danish job search rival

Updated 27 June 2022
  • The complaint could accelerate EU antitrust chief Margrethe Vestager’s scrutiny of Google for Jobs

BRUSSELS: Google was hit with an antitrust complaint on Monday after a Danish online job-search rival took its grievance to EU regulators, alleging the Alphabet unit had unfairly favored its own job search service.
The complaint could accelerate EU antitrust chief Margrethe Vestager’s scrutiny of the service, Google for Jobs, three years after it first came under her microscope. Since then the EU has taken no specific action relating to the online job-search sector.
The European Commission and Google did not immediately respond to requests for comment sent out of office hours.
Google, which has been fined more than $8.4 billion (€8 billion) by Vestager in recent years for various anti-competitive practices, has previously said it made changes in Europe after complaints from online job-search rivals.
Launched in Europe in 2018, Google for Jobs triggered criticism from 23 online job-search websites in 2019. They said they had lost market share after the online search giant had allegedly used its market power to push its new service.
Google’s service links to postings aggregated from many employers, allowing candidates to filter, save and get alerts about openings, though they must go elsewhere to apply. Google places a large widget for the tool at the top of results for ordinary web searches.
Jobindex, one of the 23 critics three years ago, said Google had skewed what had been a highly competitive Danish market toward itself via anticompetitive means.
Jobindex founder and CEO Kaare Danielsen said his company had built up the largest jobs database in Denmark by the time Google for Jobs had entered the local market last year.
“Nevertheless, in the short time following the introduction of Google for Jobs in Denmark, Jobindex lost 20 percent of search traffic to Google’s inferior service,” Danielsen told Reuters.
“By putting its own inferior service at the top of results pages, Google in effect hides some of the most relevant job offerings from job seekers. Recruiters in turn may no longer reach all job seekers, unless they use Google’s job service,” he said.
“This does not just stifle competition among recruitment services but directly impairs labor markets, which are central to any economy,” Danielsen said, urging the Commission to order Google to stop the alleged anti-competitive practices, fine the company and impose periodic payments to ensure compliance.
Jobindex said it had seen examples of free-riding, with some of its own job ads copied without its permission and marketed through Google for Jobs on behalf of Jobindex’s business partners. It also cited privacy risks to job applicants and its clients.


REVIEW: Kwai app

Updated 27 June 2022
  • The app has a live feature that allows users to engage with and converse with their followers, compete against other users and earn money

Kwai is a new social media platform that allows users to share and edit short videos ranging from 15 seconds to five minutes in length.

Find funny short videos, add recordings and videos of your daily life, take part in daily challenges, or look for the best memes and videos.

The app will make it easier for users to raise their profile and appear on trending pages.

Kwai recently signed deals with a number of Arab influencers to enhance engagement in the Middle East.

Use the app’s video editor to create your own masterpieces by utilizing your device’s camera, adding music or filters and instantly uploading your video.

The app has a live feature that allows users to engage with and converse with their followers, compete against other users and earn money.

Rules for posting videos help to protect younger viewers, while Kwai focuses on Arab culture, creating hashtags and challenges suited to Arab audiences.

The app, developed by Chinese company Beijing Kuaishou Technology, has been downloaded by 300 million users worldwide.