Facebook’s language gaps weaken screening of hate, terrorism

Facebook reported internally it had erred in nearly half of all Arabic language takedown requests submitted for appeal. (File/AFP)
Updated 25 October 2021
  • Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects
  • In some of the world’s most volatile regions, terrorist content and hate speech proliferate because Facebook remains short on moderators who speak local languages and understand cultural contexts

DUBAI: As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.
Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.
For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.
Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.
Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.
In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.
“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”
This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions were reviewed by a consortium of news organizations, including The Associated Press.
In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.
But when it comes to Arabic content moderation, the company said, “We still have more work to do. ... We conduct research to better understand this complexity and identify how we can improve.”
In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.
The Rohingya’s persecution, which the US has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.
Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.
In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.
In one document, the company notes that users linked to Modi’s party had created multiple accounts to supercharge the spread of Islamophobic content. Much of this content was “never flagged or actioned,” the research found, because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.
Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts.
Moroccan colloquial Arabic, for instance, includes French and Berber words and is spoken with short vowels. Egyptian Arabic, on the other hand, includes some Turkish from the Ottoman conquest. Other dialects are closer to the “official” version found in the Qur’an. In some cases, these dialects are not mutually comprehensible, and there is no standard way of transcribing colloquial Arabic.
Facebook first developed a massive following in the Middle East during the 2011 Arab Spring uprisings, and users credited the platform with providing a rare opportunity for free expression and a critical source of news in a region where autocratic governments exert tight controls over both. But in recent years, that reputation has changed.
Scores of Palestinian journalists and activists have had their accounts deleted. Archives of the Syrian civil war have disappeared. And a vast vocabulary of everyday words has become off-limits to speakers of Arabic, Facebook’s third-most common language with millions of users worldwide.
For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.
Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.
He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.
Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.
But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.
Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the US government equivalent — are grounds for a takedown.
“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”
The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show. The result, Facebook employees write in the documents, is a widespread perception of censorship.
“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.
In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”
“We know our systems are not perfect,” it added.
The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.
Former Facebook employees also say that various governments exert pressure on the company, threatening regulation and fines. Israel, a lucrative source of advertising revenue for Facebook, is the only country in the Mideast where Facebook operates a national office. Its public policy director previously advised former right-wing Prime Minister Benjamin Netanyahu.
Israeli security agencies and watchdogs monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.
“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017. “That forces the system to make mistakes in Israel’s favor. Nowhere else in the region had such a deep understanding of how Facebook works.”
Facebook said in a statement that it fields takedown requests from governments no differently from those from rights organizations or community members, although it may restrict access to content based on local laws.
“Any suggestion that we remove content solely under pressure from the Israeli government is completely inaccurate,” it said.
Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident content for removal.
Raed, a former reporter at the Aleppo Media Center, a group of antigovernment activists and citizen journalists in Syria, said Facebook erased most of his documentation of Syrian government shelling of neighborhoods and hospitals, citing graphic content.
“Facebook always tells us we break the rules, but no one tells us what the rules are,” he added, giving only his first name for fear of reprisals.
In Afghanistan, many users literally cannot understand Facebook’s rules. According to an internal report in January, Facebook did not translate the site’s hate speech and misinformation pages into Dari and Pashto, the two most common languages in Afghanistan, where English is not widely understood.
When Afghan users try to flag posts as hate speech, the drop-down menus appear only in English. So does the Community Standards page. The site also doesn’t have a bank of the hate speech terms, slurs and code words used in Afghanistan to moderate Dari and Pashto content, as is typical elsewhere. Without this local word bank, Facebook can’t build the automated filters that catch the worst violations in the country.
When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.
In much of the Arab world, the opposite is true — the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled human moderators, in over their heads, tend to passively field takedown requests instead of screening proactively.
Sophie Zhang, a former Facebook employee-turned-whistleblower who worked at the company for nearly three years before being fired last year, said contractors in Facebook’s Ireland office complained to her they had to depend on Google Translate because the company did not assign them content based on what languages they knew.
Facebook outsources most content moderation to giant companies that enlist workers far afield, from Casablanca, Morocco, to Essen, Germany. The firms don’t sponsor work visas for the Arabic teams, limiting the pool to local hires in precarious conditions — mostly Moroccans who seem to have overstated their linguistic capabilities. They often get lost in the translation of Arabic’s 30-odd dialects, flagging inoffensive Arabic posts as terrorist content 77 percent of the time, one document said.
“These reps should not be fielding content from non-Maghreb region, however right now it is commonplace,” another document reads, referring to the region of North Africa that includes Morocco. The file goes on to say that the Casablanca office falsely claimed in a survey it could handle “every dialect” of Arabic. But in one case, reviewers incorrectly flagged a set of Egyptian dialect content 90 percent of the time, a report said.
Iraq ranks highest in the region for its reported volume of hate speech on Facebook. But among reviewers, knowledge of Iraqi dialect is “close to non-existent,” one document said.
“Journalists are trying to expose human rights abuses, but we just get banned,” said one Baghdad-based press freedom activist, who spoke on condition of anonymity for fear of reprisals. “We understand Facebook tries to limit the influence of militias, but it’s not working.”
Linguists described Facebook’s system as flawed for a region with a vast diversity of colloquial dialects that Arabic speakers transcribe in different ways.
“The stereotype that Arabic is one entity is a major problem,” said Enam Al-Wer, professor of Arabic linguistics at the University of Essex, citing the language’s “huge variations” not only between countries but class, gender, religion and ethnicity.
Despite these problems, moderators are on the front lines of what makes Facebook a powerful arbiter of political expression in a tumultuous region.
Although the documents from Haugen predate this year’s Gaza war, episodes from that 11-day conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.
Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information for many users. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.
“This has restrained me and prevented me from feeling free to publish what I want for fear of losing my account,” said Soliman Hijjy, a Gaza-based journalist whose aerials of the Mediterranean Sea garnered tens of thousands more views than his images of Israeli bombs — a common phenomenon when photos are flagged for violating community standards.
During the war, Palestinian advocates submitted hundreds of complaints to Facebook, often leading the company to concede error and reinstate posts and accounts.
In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.
“The repetition of false positives creates a huge drain of resources,” it said.
In announcing the reversal of one such Palestinian post removal last month, Facebook’s semi-independent oversight board urged an impartial investigation into the company’s Arabic and Hebrew content moderation. It called for improvement in its broad terrorism blacklist to “increase understanding of the exceptions for neutral discussion, condemnation and news reporting,” according to the board’s policy advisory statement.
Facebook’s internal documents also stressed the need to “enhance” algorithms, enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.
“With the size of the Arabic user base and potential severity of offline harm … it is surely of the highest importance to put more resources to the task to improving Arabic systems,” said the report.
But the company also lamented that “there is not one clear mitigation strategy.”
Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.
“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”


Australia to introduce new laws to force media platforms to unmask online trolls

Australian Prime Minister Scott Morrison gestures during a press conference at Parliament House in Canberra, Thursday, Nov. 25, 2021. (AP)
Updated 28 November 2021
  • The new legislation will introduce a complaints mechanism, so that if somebody thinks they are being defamed, bullied or attacked on social media, they will be able to require the platform to take the material down

MELBOURNE: Australia will introduce legislation to make social media giants provide details of users who post defamatory comments, Prime Minister Scott Morrison said on Sunday.
The government has been examining the extent to which platforms such as Twitter and Facebook are responsible for defamatory material published on their sites. The move comes after the country’s highest court ruled that publishers can be held liable for public comments on online forums.
The ruling caused some news companies like CNN to deny Australians access to their Facebook pages.
“The online world should not be a wild west where bots and bigots and trolls and others are anonymously going around and can harm people,” Morrison said at a televised press briefing. “That is not what can happen in the real world, and there is no case for it to be able to be happening in the digital world.”
The new legislation will introduce a complaints mechanism, so that if somebody thinks they are being defamed, bullied or attacked on social media, they will be able to require the platform to take the material down.
If the content is not withdrawn, a court process could force a social media platform to provide details of the commenter.
“Digital platforms — these online companies — must have proper processes to enable the takedown of this content,” Morrison said.
“They have created the space and they need to make it safe, and if they won’t, we will make them (through) laws such as this.”
 


UK migrant deaths: Priti Patel demands BBC drop ‘dehumanizing’ language 

Britain will do whatever is necessary to help secure the French coast to stop migrants risking their lives trying to cross the English Channel. (Reuters)
Updated 27 November 2021
  • On Wednesday, 27 people headed for the UK drowned in the English Channel near Calais after their boat sank

LONDON: UK Home Secretary Priti Patel pledged to ask the BBC and other media channels to abandon the use of the term “migrants,” claiming that the word is “dehumanizing.”

Patel made the pledge after being challenged by the Scottish National Party MP Brendan O’Hara on the BBC’s use of “migrants” to describe the 27 men, women and children who died while crossing the English Channel earlier this week.

On Wednesday, 27 people headed for the UK drowned in the English Channel near Calais after their boat sank. Those who drowned included 17 men, seven women — one of whom was pregnant — and three children.

Following the incident, O’Hara had told the House of Commons: “Last night, I tuned in to the BBC News to get the latest on this terrible disaster and I was absolutely appalled when a presenter informed me that around 30 migrants had drowned.

“Migrants don’t drown. People drown. Men, women and children drown,” he added, urging Patel to take action and ask the BBC and other news outlets to “reflect on their use of such dehumanizing language and afford these poor people the respect that they deserve.”

Patel responded positively to O’Hara’s request, and said: “Even during the Afghan operations and evacuation I heard a lot of language that quite frankly seemed to be inappropriate around people who were fleeing.

“So yes, I will,” she pledged.

Patel had previously blamed France for the deaths of the 27 people, saying that it was up to the French to take action to prevent further tragedies.

She claimed that while there was no rapid solution to the issue of people seeking to cross the English Channel, she had reiterated an offer to send more police to France.


Rights watchdog condemns assault of Afghan journalist

Afghan journalist Ahmad Baseer Ahmadi was recently attacked while walking to his home in Kabul. (CPJ/Social Media)
Updated 27 November 2021
  • Ahmad Baseer Ahmadi, a presenter at privately owned broadcaster Ayna TV, was walking to his house when two unidentified men assaulted him
  • In October, unidentified gunmen injured journalists Abdul Khaliq Hussaini and Alireza Sharifi in separate attacks in Kabul

LONDON: The Committee to Protect Journalists has condemned the violent attack on Afghan journalist Ahmad Baseer Ahmadi, who was assaulted in Kabul while on his way home.

Ahmadi, a presenter at the privately owned broadcaster Ayna TV, was walking to his house when two unidentified men assaulted him and attempted to shoot him. 

The men, whose faces were covered by black handkerchiefs, reportedly shouted, “Reporter! Stop,” demanded to see his identification card and asked him where he worked. 

“The Taliban has repeatedly failed to uphold its stated commitment to press freedom, as violent attacks against journalists continue and proper investigations or accountability are nowhere to be found,” said CPJ’s Asia program coordinator, Steven Butler.

“The Taliban should reverse this trend by thoroughly investigating the attack on Ahmad Baseer Ahmadi, and holding the perpetrators accountable.”

Ahmadi’s assailants reportedly demanded he unlock his phone and open his WhatsApp and Facebook accounts. When Ahmadi refused, the men beat him with pistols and proceeded to shoot at him when he asked for help. 

The shots missed Ahmadi, but the men continued kicking him while he was on the ground, breaking his jaw. 

Since the Taliban takeover of Afghanistan last August, the CPJ has voiced concerns about the safety of Afghan journalists, reporters and media workers. 

In October, unidentified gunmen injured journalists Abdul Khaliq Hussaini and Alireza Sharifi in separate attacks in Kabul, and Taliban members beat and detained Zahidullah Husainkhil.


Award winners revealed at prestigious Middle East PR industry gongs ceremony

Updated 26 November 2021
  • 88 entries shortlisted in 56 categories for 2021 Middle East Public Relations Association awards

DUBAI: This year’s winners of a prestigious Middle Eastern public relations awards scheme were revealed at a recent presentation ceremony in the UAE.

More than 88 entries were shortlisted across 56 categories in the 2021 edition of the Middle East Public Relations Association awards.

The communications industry has been seriously impacted by the coronavirus disease (COVID-19) pandemic, with many companies and organizations cutting their advertising and marketing budgets.

And the latest MEPRA awards took into account the damage caused to the sector by the global health crisis through categories such as best creative approach and best internal communications response during COVID-19, and best social impact campaign in response to the virus outbreak.

Those categories saw gold trophies awarded to APCO Worldwide for its campaign “Adapting UOWD’s Education Model in the Age of the Pandemic,” Mastercard MEA for its “Priceless Together” project, and Action Global Communications for “ADEK Back to School,” respectively.

During the awards ceremony held in Dubai on Wednesday, Red Havas bagged gold for best campaign in the Middle East with Adidas’ “Beyond the Surface,” and Hill+Knowlton Strategies took silver and bronze for its PUBG Mobile “Game on Henedy,” and Facebook Inc. “#MonthofGood” campaigns, respectively.

To mark MEPRA’s 20th anniversary this year, the awards featured a new category of people’s choice for the best Middle East campaign over the last two decades, won by Weber Shandwick MENAT and Environment Agency Abu Dhabi for the “Vote Bu Tinah!” campaign.

Special gongs on the night included the chairman’s lifetime achievement award that went to Jack Pearce of Matrix Public Relations, the small in-house team of the year accolade handed to Mastercard MENA, and the large in-house team of the year prize given to the UAE government’s media office.

Agency titles were awarded to Gambit Communications for best home-grown operation as well as small agency of the year, with Acorn Strategy being crowned large agency of the year.


New Zealand PM says Facebook, others must do more against online hate

Ardern and French President Emmanuel Macron launched a global initiative to end online hate in 2019. (File/AFP)
Updated 26 November 2021
  • New Zealand PM said tech giants and world leaders needed to do “much more” to stamp out violent extremism and radicalization online

LONDON: Tech giants like Meta’s Facebook and world leaders needed to do “much more” to stamp out violent extremism and radicalization online, New Zealand Prime Minister Jacinda Ardern said on Friday.
Ardern and French President Emmanuel Macron launched a global initiative to end online hate in 2019 after a white supremacist killed 51 people at two mosques in the New Zealand city of Christchurch while live-streaming his rampage on Facebook.
This Christchurch Call initiative has been supported by more than 50 countries, international organizations and tech firms, including Facebook, Google, Twitter and Microsoft.
Ardern said on Friday the initiative had been successful in its first aim of establishing a crisis protocol, including a 24/7 network between platforms to quickly remove content, in response to events like those in Christchurch.
“We have had real world stress-testing of those systems and they have worked very effectively,” Ardern said in an interview for the upcoming Reuters Next conference.
“I am confident that we are operating more effectively than we have before,” she added. “The next challenge though, is to go further again.”
Asked what tech companies should be doing, Ardern replied: “much more.”
Ardern said the next step was to focus on prevention, looking at how people are finding or coming across hateful or terror-motivating content online and perhaps becoming radicalized.
“That’s where we are really interested in the ongoing work around algorithms and the role that we can all play to ensure that online platforms don’t become a place of radicalization,” she said.
A Christchurch Call conference earlier this year was attended by the United States and Britain.