Meta’s Oversight Board issued 20 decisions in its first year. Is that enough?

During the period covered by the report, the board received more than a million appeals, issued 20 decisions — 14 of which overturned Meta’s own rulings — and made 86 recommendations to the company. (AFP)
Updated 02 July 2022

  • Board shows commitment to bringing about positive change, and to lobbying Meta to do the same. But is this enough?
  • The first annual report from the independent review body, which is funded by Meta, explains the reasoning behind its 20 rulings and the 86 recommendations it has made

DUBAI: Meta’s Oversight Board has published its first annual report. Covering the period from October 2020 to December 2021, it describes the work the board has carried out in relation to how Meta, the company formerly known as Facebook, treats its users and their content, and the work that remains to be done.

The board is an independent body set up and funded by Meta to review content and content-moderation policies on Facebook and Instagram. It considers concerns raised by Meta itself and by users who have exhausted the company’s internal appeals process. It can recommend policy changes and make decisions that overrule the company’s decisions.

During the period covered by the report, the board received more than a million appeals, issued 20 decisions — 14 of which overturned Meta’s own rulings — and made 86 recommendations to the company.

“Through our first Annual Report, we’re able to demonstrate the significant impact the board has had on pushing Meta to become more transparent in its content policies and fairer in its content decisions,” Thomas Hughes, the board’s director, told Arab News.

One of the cases the board considered concerns a post that appeared on media organization Al Jazeera Arabic’s verified page in May 2021, and which was subsequently shared by a Facebook user in Egypt. It consisted of Arabic text and a photo showing two men, their faces covered, who were wearing camouflage and headbands featuring the insignia of the Palestinian Al-Qassam Brigades.

The text read: “The resistance leadership in the common room gives the occupation a respite until 6 p.m. to withdraw its soldiers from Al-Aqsa Mosque and Sheikh Jarrah neighborhood, otherwise he who warns is excused. Abu Ubaida – Al-Qassam Brigades military spokesman.”

The user who shared the post commented on it in Arabic by adding the word “ooh.”

Meta initially removed the post because Al-Qassam Brigades and its spokesperson, Abu Ubaida, are designated under Facebook’s Dangerous Individuals and Organizations community standard. However, it restored the post based on a ruling by the board.

The board said in its report that while the community standard policy clearly prohibits “channeling information or resources, including official communications, on behalf of a designated entity,” it also noted there is an exception to this rule for content that is published as “news reporting.” It added that the content in this case was a “reprint of a widely republished news report” by Al Jazeera and did not include any major changes other than the “addition of the non-substantive comment, ‘ooh.’”

Meta was unable to explain why two of its reviewers judged the content to be in violation of the platform’s content policies but noted that moderators are not required to record their reasoning for individual content decisions.

According to the report, the case also highlights the board’s objective of ensuring users are treated fairly because “the post, consisting of a republication of a news item from a legitimate outlet, was treated differently from content posted by the news organization itself.”

Based on allegations that Facebook was censoring Palestinian content, the board asked the platform a number of questions, including whether it had received any requests from Israel to remove content related to the 2021 Israeli-Palestinian conflict.

In response, Facebook said that it had not received any valid, legal requests from a government authority related to the user’s content in this case. However, it declined to provide any other requested information.

The board therefore recommended an independent review of these issues, as well as greater transparency about how Facebook responds to government requests.

“Following recommendations we issued after a case decision involving Israel/Palestine, Meta is conducting a review, using an independent body, to determine whether Facebook’s content-moderation community standards in Arabic and Hebrew are being applied without bias,” said Hughes.

In another case, the Oversight Board overturned Meta’s decision to remove an Instagram post by a public account that allows the discussion of queer narratives in Arabic culture. The post consisted of a series of pictures with a caption, in Arabic and English, explaining how each picture illustrated a different word that can be used in a derogatory way in the Arab world to describe men with “effeminate mannerisms.”

Meta removed the content for violating its hate speech policies but restored it when the user appealed. However, it later removed the content a second time for violating the same policies, after other users reported it.

According to the board, this was a “clear error, which was not in line with Meta’s hate speech policy.” It said that while the post does contain terms that are considered slurs, it is covered by an exception covering speech that is “used self-referentially or in an empowering way,” and also an exception that allows the quoting of hate speech to “condemn it or raise awareness.”

Each time the post was reported, a different moderator reviewed it. The board was, therefore, “concerned that reviewers may not have sufficient resources in terms of capacity or training to prevent the kind of mistake seen in this case.”

Hughes said: “As demonstrated in this report, we have a track record of success in getting Meta to consider how it handles posts in Arabic.

“We’ve succeeded in getting Meta to ensure its community standards are translated into all relevant languages, prioritizing regions where conflict or unrest puts users at most risk of imminent harm. Meta has also agreed to our call to ensure all updates to its policies are translated into all languages.”

These cases illustrate the board’s commitment to bringing about positive change, and to lobbying Meta to do the same, whether that means restoring an improperly deleted post or agreeing to an independent review of a case. But is this enough?

This month, Facebook once again failed a test of its ability to detect obviously unacceptable violent hate speech. The test was carried out by nonprofit groups Global Witness and Foxglove, which created 12 text-based adverts featuring dehumanizing hate speech that called for the murder of people belonging to Ethiopia’s three main ethnic groups — the Amhara, the Oromo and the Tigrayans — and submitted them to the platform. Despite the clearly objectionable content, Facebook’s systems approved the adverts for publication.

In March, Global Witness ran a similar test using adverts about Myanmar that used similar hate speech. Facebook also failed to detect those. The ads were not actually published on Facebook because Global Witness alerted Meta to the test and the violations the platform had failed to detect.

In another case, the Oversight Board upheld Meta’s initial decision to remove a post alleging the involvement of ethnic Tigrayan civilians in atrocities carried out in the Amhara region of Ethiopia. Meta had restored the post after the user appealed to the board, so following the board’s ruling the company had to remove the content from the platform once again.

In November 2021, Meta announced that it had removed a post by Ethiopia’s prime minister, Abiy Ahmed Ali, in which he urged citizens to rise up and “bury” rival Tigray forces who threatened the country’s capital. His verified Facebook page remains active, however, and has 4.1 million followers.

In addition to its failures over content relating to Myanmar and Ethiopia, Facebook has long been accused by rights activists of suppressing posts by Palestinians.

“Facebook has suppressed content posted by Palestinians and their supporters speaking out about human rights issues in Israel and Palestine,” said Deborah Brown, a senior digital rights researcher and advocate at Human Rights Watch.

During the May 2021 Israeli-Palestinian conflict, Facebook and Instagram removed content posted by Palestinians and posts that expressed support for Palestine. HRW documented several instances of this, including one in which Instagram removed a screenshot of the headlines and photos from three New York Times op-ed articles, to which the user had added a caption that urged Palestinians to “never concede” their rights.

In another instance, Instagram removed a post that included a picture of a building and the caption: “This is a photo of my family’s building before it was struck by Israeli missiles on Saturday, May 15, 2021. We have three apartments in this building.”

Digital rights group Sada Social said that in May 2021 alone it documented more than 700 examples of social media networks removing or restricting access to Palestinian content.

According to HRW, Meta’s acknowledgment of errors that were made and attempts to correct some of them are insufficient and do not address the scale and scope of reported content restrictions, nor do they adequately explain why they occurred in the first place.

Hughes acknowledged that some of the commitments to change made by Meta will take time to implement but added that it is important to ensure that they are “not kicked into the long grass and forgotten about.”

Meta admitted this year in its first Quarterly Update on the Oversight Board that it takes time to implement recommendations “because of the complexity and scale associated with changing how we explain and enforce our policies, and how we inform users of actions we’ve taken and what they can do about it.”

In the meantime, Hughes added: “The Board will continue to play a key role in the collective effort by companies, governments, academia and civil society to shape a brighter, safer digital future that will benefit people everywhere.”

However, the Oversight Board only reviews cases reported by users or by Meta itself. According to some experts, the issues with Meta go far beyond the current scope of the board’s mandate.

“For an oversight board to address these issues (Russian interference in the US elections), it would need jurisdiction not only over personal posts but also political ads,” wrote Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School.

“Beyond that, it would need to be able to not only take down specific pieces of content but also to halt the flow of American consumer data to Russian operatives and change the ways that algorithms privilege contentious content.”

He went on to suggest that the board’s authority should be expanded from content takedowns to include “more critical concerns” such as the company’s data practices and algorithmic decision-making because “no matter where we set the boundaries, Facebook will always want to push them. It knows no other way to maintain its profit margins.”


Asset managers on alert after ‘WhatsApp’ crackdown on banks

Updated 18 August 2022
  • Demand for software to record, archive messaging on the rise
  • Continued remote working underscores risk of compliance missteps, with banks paying hundreds of millions of dollars in regulatory fines

LONDON: Asset managers are tightening controls on personal communication tools such as WhatsApp as they join banks in trying to ensure employees play by the rules when they do business with clients remotely.
Regulators had already begun to clamp down on the use of unauthorized messaging tools to discuss potentially market-moving matters, but the issue gathered urgency when the pandemic forced more finance staff to work from home in 2020.
Most of the companies caught in communications and record-keeping probes by the US Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have been banks — which have collectively been fined or have set aside more than $1 billion to cover regulatory penalties.
But fund firms with billions of dollars in assets are also increasing their scrutiny of how staff and clients interact.
“It is the hottest topic in the industry right now,” said one deals banker, who declined to be named in keeping with his employer’s rules on speaking to the media.
Reuters reported last year the SEC was looking into whether Wall Street banks had adequately documented employees’ work-related communications, and JPMorgan was fined $200 million in December for “widespread” failures.
German asset manager DWS said last month it had set aside 12 million euros ($12 million) to cover potential US fines linked to investigations into its employees’ use of unapproved devices and record-keeping requirements, joining a host of banks making similar provisions, including Bank of America, Morgan Stanley and Credit Suisse.
Sources at several other investment firms — described in the financial community as the ‘buy-side’ — including Amundi, AXA Investment Management, BNP Paribas Asset Management and JPMorgan Asset Management, told Reuters they have deployed tools to keep all communications between staff and clients compliant.
Spokespeople for the SEC and CFTC declined to comment on whether their investigations could extend beyond the banks, but industry sources expect authorities to cast their nets wider across the finance industry and even into government.
Last month Britain’s Information Commissioner’s Office (ICO), the country’s top data protection watchdog, called for a review of the use of WhatsApp, private emails and other messaging apps by government officials after an investigation found “inadequate data security” during the pandemic.
Regulations governing financial institutions have progressively been tightened since the global financial crisis of 2007-9 and companies have long recorded staff communications to and from office phones.
This practice is designed to deter and uncover infringements such as insider trading and “front-running,” or trading on information that is not yet public, as well as ensuring best practice in terms of treatment of customers.
But with thousands of finance workers and their clientele still working remotely after decamping from company offices at the start of the pandemic, some sensitive conversations that should be recorded remain at risk of being inadvertently held over informal or unauthorized channels.
Brad Levy, CEO of business messaging software firm Symphony, said concerns on managing that risk had driven a surge in interest in software upgrades that make conversations on popular messaging tools, including Meta Platforms’ WhatsApp, recordable.
“Most believe the breadth of these investigations will go wider as they go deeper,” Levy said.
“Many markets participants have retention and surveillance requirements so are likely to take a view, including being more proactive without being a direct target.”
He said Symphony’s user base has more than doubled since the pandemic to 600,000, spanning 1,000 financial institutions including JPMorgan and Goldman Sachs.
Symphony peer Movius also said its business lines specializing in making WhatsApp and other tools recordable have more than doubled in size in the space of a year, with sales to asset managers a growing component.
“Many on the buy-side have recognized that you can’t just rely on SMS and voice calls,” said Movius Chief Executive Ananth Siva, adding that the company was also seeking to work with other highly-regulated industries including health care.
Movius software integrates third-party communications tools such as email, Zoom, Microsoft Teams and WhatsApp into one system that can be recorded and archived as required, he said.
Amundi, AXA IM, BNPP AM and JPMorgan Asset Management all confirmed they had adopted Symphony software but declined to comment on the full breadth of services they used or when these had been rolled out.
Amundi and AXA IM both confirmed they used Symphony services for team communications, while AXA IM also said they used it for market information.
Amundi, BNPP AM and JP Morgan AM declined to comment on whether they thought regulators would seek to investigate record keeping at asset managers after enforcement actions against the banks were completed.
A spokesperson for BNPP AM said it had banned the use of WhatsApp for client communications due to compliance, legal and risk considerations including General Data Protection Regulation (GDPR).


TV viewership among UK youth slumps amid ‘generation gap,’ report finds

Updated 17 August 2022
  • ‘Young, vibrant’ MENA population bucks trend with streaming surge

LONDON: A new Ofcom report released on Wednesday found that young people in the UK watch barely one-seventh as much TV as people aged over 65.

The UK’s communications regulator said that the “generation gap” in the way media is consumed has reached an all-time high.

Brits aged 16-24 reportedly favor streaming platforms and social media over traditional broadcast TV and spend an average of 53 minutes per day watching TV — a decrease of two-thirds over the past decade.

“The streaming revolution is stretching the TV generation gap, creating a stark divide in the viewing habits of younger and older people,” said Ian Macrae, Ofcom director of market intelligence.

“Traditional broadcasters face tough competition from online streaming platforms, which they’re partly meeting through the popularity of their own on-demand player apps, while broadcast television is still the place to go for big events that bring the nation together, such as the Euros final or the Jubilee celebrations,” he added. 

However, the latest market study undertaken by Arabsat in conjunction with Ipsos in 2021 found that the TV audience in the Middle East and North Africa region “boasts a young, vibrant, and diverse community,” with 45 percent of viewers aged under 30.

This trend, in stark contrast with the Ofcom report, illustrates “the strong sustainable relevance of satellite TV also amongst younger TV audiences in MENA.”

Ofcom attributed the decline in TV viewership among young people in the UK to the rise in popular streaming services and short-form video platforms.

In its report, the regulator said about one in five UK homes have a subscription to all three of the biggest streaming services: Netflix, Disney+ and Amazon Prime Video.

The regulator also warned that public sector broadcasters will continue to experience declines in viewership over the coming years.

MENA’s streaming industry has been “rapidly increasing,” according to an independent study by market research firm Dataxis. The region’s streaming platforms saw a 30 percent increase in subscribers between 2020 and 2021, reaching close to 10 million users in 18 countries.

By 2026 subscriptions in the region are expected to triple to close to 30 million.

However, the Arabsat report said: “Satellite TV continues to remain the strongest mode of content distribution in the region, with 93 percent total market share.”


Airbnb targets illegal get-togethers with ‘anti-party technology’

Updated 17 August 2022
  • Move comes after property rental company made a ban on house parties permanent earlier this year

LONDON: Airbnb said on Tuesday that it will roll out “anti-party technology” as part of efforts to stop illegal partying in its listed properties.

The new system, which will be deployed initially in North America, will look at a range of factors to identify types of reservations that are likely to result in unlawful parties. These include “history of positive reviews (or lack of positive reviews), length of time the guest has been on Airbnb, length of the trip, distance to the listing, and weekend versus weekday.” 

Airbnb said in a statement that “the primary objective is attempting to reduce the ability of bad actors to throw unauthorized parties which negatively impact our hosts, neighbors and the communities we serve. 

“It’s integral to our commitment to our host community — who respect their neighbors and want no part of the property damage and other issues that may come with unauthorized or disruptive parties.”

The announcement comes after the company decided to make a ban on house parties permanent earlier this year.

Since October 2021, Airbnb has been trialling the technology in select areas of Australia, where it recorded a “35 percent drop in incidents of unauthorized parties,” the company said.

Similar initiatives were previously put in place by the peer-to-peer property rental platform. In July 2020, it introduced a system that prevented under-25s in North America from booking large houses close to where they live if they did not have a history of positive reviews.

“As we get more reservations and bookings, we look at how things are trending, how our metrics are trending,” said Naba Banerjee, Airbnb’s global head of product, operations, and strategy for trust and safety.

“We try to look at the rate of safety incidents, and we try to make sure that we are launching solutions that constantly try to work on that rate.”

Airbnb has long sought to crack down on illegal parties. The company announced in 2019 that “party homes” would be banned after five people were killed in a shooting at a Halloween gathering in an Airbnb property in Orinda, California, where over 100 people were reportedly present.

In 2020, the company began imposing stricter regulations around its “house party” policy amid the global pandemic. Both the “event friendly” search filter and “parties and events allowed” house rules were removed as it sought to counter a rise in house party bookings as bars and clubs were closed.

More than 6,600 guests and some hosts were suspended in 2021 for attempting to violate the party ban, the company said.

Airbnb also announced the introduction of a neighborhood support helpline to “facilitate direct communication with neighbors regarding potential parties in progress or concerns with any nearby listings.”

“We are, at the end of the day, an open marketplace, we are making real-world connections, and we are often a mirror of society. And no solution is 100 percent perfect,” Banerjee said.


TikTok to clamp down on paid political posts by influencers ahead of US midterms

Updated 17 August 2022
  • Critics and lawmakers accuse TikTok and rival social media companies of doing too little to stop political misinformation and divisive content from spreading on their apps

LONDON: TikTok will work to prevent content creators from posting paid political messages on the short-form video app, as part of its preparation for the US midterm election in November, the company said on Wednesday.
Critics and lawmakers accuse TikTok and rival social media companies including Meta Platforms and Twitter of doing too little to stop political misinformation and divisive content from spreading on their apps.
While TikTok has banned paid political ads since 2019, campaign strategists have skirted the ban by paying influencers to promote political issues.
The company seeks to close the loophole by hosting briefings with creators and talent agencies to remind them that posting paid political content is against TikTok’s policies, said Eric Han, TikTok’s head of US safety, during a briefing with reporters.
He added that internal teams, including those that work on trust and safety, will monitor for signs that creators are being paid to post political content, and the company will also rely on media reports and outside partners to find violating posts.
“We saw this as an issue in 2020,” Han said. “Once we find out about it ... we will remove it from our platform.”
TikTok broadcast its plan following similar updates from Meta and Twitter.
Meta, which owns Facebook and Instagram, said Tuesday it will restrict political advertisers from running new ads a week before the election, an action it also took in 2020.
Last week, Twitter said it planned to revive previous strategies for the midterm election, including placing labels in front of some misleading tweets and inserting reliable information into timelines to debunk false claims before they spread further online. Civil and voting rights experts said the plan was not adequate to prepare for the election.


Royal Jordanian set to sponsor Arab Influencers Forum

Updated 17 August 2022
  • The airline’s CEO said its sponsorship is in line with its vision to support all efforts promoting Jordan

AMMAN: Royal Jordanian Airlines is sponsoring the inaugural City Talk, a forum for Arab influencers due to take place in Jordan in early October, the Jordan News Agency reported on Tuesday. The airline said that it will also serve as official carrier for the forum’s guests from across the region.

The event is being organized by the Jordan Tourism Board and Omnes Media, a digital-media and communications platform based in Dubai.

Royal Jordanian CEO Samer Majali said the airline’s sponsorship of the event reflects its vision and desire to support all initiatives and events that promote Jordan.

He added that by attracting social media content creators and marketing industry professionals from across the Arab world, the forum will help to market the culture and heritage of Jordan and its tourism sector.

City Talk is scheduled to take place Oct. 2-5 at King Hussein bin Talal Convention Center near Sweimeh, on the Dead Sea shore. More than 500 Arab social media influencers and industry leaders are expected to attend.

The forum will explore and discuss recent advances in the marketing and advertising industry. The schedule includes six panel discussions and six workshops, along with daily meetings with influential Arab figures.