Facebook’s language gaps weaken screening of hate, terrorism

Facebook reported internally it had erred in nearly half of all Arabic language takedown requests submitted for appeal. (File/AFP)
Updated 25 October 2021

  • Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects
  • In some of the world’s most volatile regions, terrorist content and hate speech proliferate because Facebook remains short on moderators who speak local languages and understand cultural contexts

DUBAI: As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.
Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.
For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.
Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.
Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.
In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.
“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”
This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions were reviewed by a consortium of news organizations, including The Associated Press.
In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.
But when it comes to Arabic content moderation, the company said, “We still have more work to do. ... We conduct research to better understand this complexity and identify how we can improve.”
In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.
The Rohingya’s persecution, which the US has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.
Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.
In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.
In one document, the company notes that users linked to Modi’s party had created multiple accounts to supercharge the spread of Islamophobic content. Much of this content was “never flagged or actioned,” the research found, because Facebook lacked moderators and automated filters with knowledge of Hindi and Bengali.
Arabic poses particular challenges to Facebook’s automated systems and human moderators, each of which struggles to understand spoken dialects unique to each country and region, their vocabularies salted with different historical influences and cultural contexts.
Moroccan colloquial Arabic, for instance, includes French and Berber words and is spoken with short vowels. Egyptian Arabic, on the other hand, includes some Turkish from the Ottoman conquest. Other dialects are closer to the “official” version found in the Qur’an. In some cases, these dialects are not mutually comprehensible, and there is no standard way of transcribing colloquial Arabic.
Facebook first developed a massive following in the Middle East during the 2011 Arab Spring uprisings, and users credited the platform with providing a rare opportunity for free expression and a critical source of news in a region where autocratic governments exert tight controls over both. But in recent years, that reputation has changed.
Scores of Palestinian journalists and activists have had their accounts deleted. Archives of the Syrian civil war have disappeared. And a vast vocabulary of everyday words has become off-limits to speakers of Arabic, Facebook’s third-most common language with millions of users worldwide.
For Hassan Slaieh, a prominent journalist in the blockaded Gaza Strip, the first message felt like a punch to the gut. “Your account has been permanently disabled for violating Facebook’s Community Standards,” the company’s notification read. That was at the peak of the bloody 2014 Gaza war, following years of his news posts on violence between Israel and Hamas being flagged as content violations.
Within moments, he lost everything he’d collected over six years: personal memories, stories of people’s lives in Gaza, photos of Israeli airstrikes pounding the enclave, not to mention 200,000 followers. The most recent Facebook takedown of his page last year came as less of a shock. It was the 17th time that he had to start from scratch.
He had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.
Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.
But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.
Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the US government equivalent — are grounds for a takedown.
“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”
The Facebook blacklist includes Gaza’s ruling Hamas party as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show. The result, Facebook employees say in the documents, is a widespread perception of censorship.
“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.
In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”
“We know our systems are not perfect,” it added.
The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.
Former Facebook employees also say that various governments exert pressure on the company, threatening regulation and fines. Israel, a lucrative source of advertising revenue for Facebook, is the only country in the Mideast where Facebook operates a national office. Its public policy director previously advised former right-wing Prime Minister Benjamin Netanyahu.
Israeli security agencies and watchdogs monitor Facebook and bombard it with thousands of orders to take down Palestinian accounts and posts as they try to crack down on incitement.
“They flood our system, completely overpowering it,” said Ashraf Zeitoon, Facebook’s former head of policy for the Middle East and North Africa region, who left in 2017. “That forces the system to make mistakes in Israel’s favor. Nowhere else in the region had such a deep understanding of how Facebook works.”
Facebook said in a statement that it fields takedown requests from governments no differently from those from rights organizations or community members, although it may restrict access to content based on local laws.
“Any suggestion that we remove content solely under pressure from the Israeli government is completely inaccurate,” it said.
Syrian journalists and activists reporting on the country’s opposition also have complained of censorship, with electronic armies supporting embattled President Bashar Assad aggressively flagging dissident content for removal.
Raed, a former reporter at the Aleppo Media Center, a group of antigovernment activists and citizen journalists in Syria, said Facebook erased most of his documentation of Syrian government shelling on neighborhoods and hospitals, citing graphic content.
“Facebook always tells us we break the rules, but no one tells us what the rules are,” he added, giving only his first name for fear of reprisals.
In Afghanistan, many users literally cannot understand Facebook’s rules. According to an internal report in January, Facebook did not translate the site’s hate speech and misinformation pages into Dari and Pashto, the two most common languages in Afghanistan, where English is not widely understood.
When Afghan users try to flag posts as hate speech, the drop-down menus appear only in English. So does the Community Standards page. The site also doesn’t have a bank of hate speech terms, slurs and code words in Afghanistan used to moderate Dari and Pashto content, as is typical elsewhere. Without this local word bank, Facebook can’t build the automated filters that catch the worst violations in the country.
When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.
In much of the Arab world, the opposite is true — the company over-relies on artificial-intelligence filters that make mistakes, leading to “a lot of false positives and a media backlash,” one document reads. Largely unskilled human moderators, in over their heads, tend to passively field takedown requests instead of screening proactively.
Sophie Zhang, a former Facebook employee-turned-whistleblower who worked at the company for nearly three years before being fired last year, said contractors in Facebook’s Ireland office complained to her they had to depend on Google Translate because the company did not assign them content based on what languages they knew.
Facebook outsources most content moderation to giant companies that enlist workers far afield, from Casablanca, Morocco, to Essen, Germany. The firms don’t sponsor work visas for the Arabic teams, limiting the pool to local hires in precarious conditions — mostly Moroccans who seem to have overstated their linguistic capabilities. They often get lost in the translation of Arabic’s 30-odd dialects, flagging inoffensive Arabic posts as terrorist content 77 percent of the time, one document said.
“These reps should not be fielding content from non-Maghreb region, however right now it is commonplace,” another document reads, referring to the region of North Africa that includes Morocco. The file goes on to say that the Casablanca office falsely claimed in a survey it could handle “every dialect” of Arabic. But in one case, reviewers incorrectly flagged a set of Egyptian dialect content 90 percent of the time, a report said.
Iraq ranks highest in the region for its reported volume of hate speech on Facebook. But among reviewers, knowledge of Iraqi dialect is “close to non-existent,” one document said.
“Journalists are trying to expose human rights abuses, but we just get banned,” said one Baghdad-based press freedom activist, who spoke on condition of anonymity for fear of reprisals. “We understand Facebook tries to limit the influence of militias, but it’s not working.”
Linguists described Facebook’s system as flawed for a region with a vast diversity of colloquial dialects that Arabic speakers transcribe in different ways.
“The stereotype that Arabic is one entity is a major problem,” said Enam Al-Wer, professor of Arabic linguistics at the University of Essex, citing the language’s “huge variations” not only between countries but class, gender, religion and ethnicity.
Despite these problems, moderators are on the front lines of what makes Facebook a powerful arbiter of political expression in a tumultuous region.
Although the documents from Haugen predate this year’s Gaza war, episodes from that 11-day conflict show how little has been done to address the problems flagged in Facebook’s own internal reports.
Activists in Gaza and the West Bank lost their ability to livestream. Whole archives of the conflict vanished from newsfeeds, a primary portal of information for many users. Influencers accustomed to tens of thousands of likes on their posts saw their outreach plummet when they posted about Palestinians.
“This has restrained me and prevented me from feeling free to publish what I want for fear of losing my account,” said Soliman Hijjy, a Gaza-based journalist whose aerials of the Mediterranean Sea garnered tens of thousands more views than his images of Israeli bombs — a common phenomenon when photos are flagged for violating community standards.
During the war, Palestinian advocates submitted hundreds of complaints to Facebook, often leading the company to concede error and reinstate posts and accounts.
In the internal documents, Facebook reported it had erred in nearly half of all Arabic language takedown requests submitted for appeal.
“The repetition of false positives creates a huge drain of resources,” it said.
In announcing the reversal of one such Palestinian post removal last month, Facebook’s semi-independent oversight board urged an impartial investigation into the company’s Arabic and Hebrew content moderation. It called for improvement in its broad terrorism blacklist to “increase understanding of the exceptions for neutral discussion, condemnation and news reporting,” according to the board’s policy advisory statement.
Facebook’s internal documents also stressed the need to “enhance” algorithms, enlist more Arab moderators from less-represented countries and restrict them to where they have appropriate dialect expertise.
“With the size of the Arabic user base and potential severity of offline harm … it is surely of the highest importance to put more resources to the task to improving Arabic systems,” said the report.
But the company also lamented that “there is not one clear mitigation strategy.”
Meanwhile, many across the Middle East worry the stakes of Facebook’s failings are exceptionally high, with potential to widen long-standing inequality, chill civic activism and stoke violence in the region.
“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”


Arab and Middle Eastern Journalists Association launches awards for exceptional reporting

Updated 11 August 2022

  • Jury panel includes award-winning reporters from NPR, Washington Post, CNN, MSNBC and NYU Journalism School
  • Each winner will receive a $500 cash prize

DUBAI: The Arab and Middle Eastern Journalists Association has launched a series of awards to highlight exceptional work by and about Arab, Middle Eastern and North African communities.
“Promoting accurate and nuanced coverage of the Middle East and North Africa regions and people is at the core of our mission,” said Hoda Osman, AMEJA president.
“We’re excited to launch the AMEJA awards so we can lift up exceptional news coverage by journalists working tirelessly to get the story right.”
The program includes three awards: best coverage of the MENA region; best coverage of MENA immigrant and heritage communities in North America; and the Walid El-Gabry Memorial Award, named after one of AMEJA’s founders to recognize the work of an AMEJA member.
Each winner will receive a $500 cash prize.
The first two awards are open to all journalists.
Entries will be judged by a jury panel, including Mohamad Bazzi, NYU journalism professor and director of the Kevorkian Center for Near Eastern Studies; Nima Elbagir, CNN chief international investigative correspondent; Leila Fadel, host of NPR’s Morning Edition; Kareem Fahim, Middle East bureau chief for The Washington Post; Ayman Mohyeldin, MSNBC host of the show “Ayman”; and Jason Rezaian, columnist at The Washington Post and host of the 544 Days podcast.
The Walid El-Gabry Memorial Award will be voted on by AMEJA’s members.
AMEJA is accepting submissions until Aug. 28. To be eligible, the work must have been published, in English, between Jan. 1, 2021, and Aug. 1, 2022. Entries can be submitted in any format from print to podcasts.
Winners will be announced in the fall of this year.


Facebook hands private data to police in abortion case against teen

Updated 11 August 2022

  • Authorities obtained incriminating messages between the mother and the daughter after approaching Facebook with a search warrant

LONDON: Facebook is under intense scrutiny after handing over to Nebraska police the private messages of a 17-year-old girl accused of crimes relating to an abortion.

The teenager is accused, along with her mother, of having broken the law that prohibits abortion after 20 weeks. According to court files, the teenager miscarried at 23 weeks of pregnancy and secretly buried the fetus with her mother’s help.

The two were charged in July with allegedly removing, concealing or abandoning a dead human body, concealing the death of another person and false reporting.

Authorities obtained incriminating messages between the mother and daughter after they approached Facebook with a search warrant.

Facebook reportedly had the option of challenging the warrant but chose instead to give police access to the teen’s direct messages. The teenager is currently facing three criminal charges as a result of using an abortion pill purchased online and burying the fetus.

“Nothing in the valid warrants we received from local law enforcement in early June, prior to the Supreme Court decision, mentioned abortion. The warrants concerned charges related to a criminal investigation and court documents indicate that police at the time were investigating the case of a stillborn baby who was burned and buried, not a decision to have an abortion,” Meta Spokesperson Andy Stone said in a statement.

This case represents one of the first instances in which a person’s social media activity has been used against them in a state where access to abortion is restricted. Many see it as a betrayal after tech companies vowed to protect users in the wake of the US Supreme Court’s overturning of Roe v. Wade.

The news comes just a few weeks after Meta CEO Mark Zuckerberg pledged to “expand encryption across the platform in an effort to keep people safe.” Meta also said it would offer financial assistance to employees having to travel to a different state to seek an abortion.


Google opposes Facebook-backed proposal for self-regulatory body in India - sources

Updated 11 August 2022

  • India wants a panel to review complaints about content decisions
  • Google says self-regulatory system sets bad precedent - sources

NEW DELHI: Google has grave reservations about developing a self-regulatory body for the social media sector in India to hear user complaints, though the proposal has support from Facebook and Twitter, sources with knowledge of the discussions told Reuters.
India in June proposed appointing a government panel to hear complaints from users about content moderation decisions, but has also said it is open to the idea of a self-regulatory body if the industry is willing.
The lack of consensus among the tech giants, however, increases the likelihood of a government panel being formed — a prospect that Meta Platforms Inc’s Facebook and Twitter are keen to avoid as they fear government and regulatory overreach in India, the sources said.
At a closed-door meeting this week, an executive from Alphabet Inc’s Google told other attendees the company was unconvinced about the merits of a self-regulatory body. The body would mean external reviews of decisions that could force Google to reinstate content, even if it violated Google’s internal policies, the executive was quoted as saying.
Such directives from a self-regulatory body could set a dangerous precedent, the sources also quoted the Google executive as saying.
The sources declined to be identified as the discussions were private.
In addition to Facebook, Twitter and Google, representatives from Snap Inc. and popular Indian social media platform ShareChat also attended the meeting. Together, the companies have hundreds of millions of users in India.
Snap and ShareChat also voiced concern about a self-regulatory system, saying the matter requires much more consultation including with civil society, the sources said.
Google said in a statement it had attended a preliminary meeting and is engaging with the industry and the government, adding that it was “exploring all options” for a “best possible solution.”
ShareChat and Facebook declined to comment. The other companies did not respond to Reuters requests for comment.

THORNY ISSUE
Self-regulatory bodies to police content in the social media sector are rare, though there have been instances of cooperation. In New Zealand, big tech companies have signed a code of practice aimed at reducing harmful content online.
Tension over social media content decisions has been a particularly thorny issue in India. Social media companies often receive takedown requests from the government or remove content proactively. Google’s YouTube, for example, removed 1.2 million videos in the first quarter of this year for violating its guidelines, the highest number for any country in the world.
India’s government is concerned that users upset with decisions to have their content taken down do not have a proper system to appeal those decisions and that their only legal recourse is to go to court.
Twitter has faced backlash after it blocked accounts of influential Indians, including politicians, citing violation of its policies. Twitter also locked horns with the Indian government last year when it declined to comply fully with orders to take down accounts the government said spread misinformation.
An initial draft of the proposal for the self-regulatory body said the panel would have a retired judge or an experienced person from the field of technology as chairperson, as well as six other individuals, including some senior executives at social media companies.
The panel’s decisions would be “binding in nature,” stated the draft, which was seen by Reuters.
Western tech giants have for years been at odds with the Indian government, arguing that strict regulations are hurting their business and investment plans. The disagreements have also strained trade ties between New Delhi and Washington.
US industry lobby groups representing the tech giants question how a government-appointed review panel could act independently if New Delhi controls who sits on it.
The proposal for a government panel was open to public consultation until early July. No fixed date for implementation has been set.


Saudi Arabia to host Arab Radio and Television Festival

Updated 11 August 2022

  • Festival running from Nov. 7 to Nov. 10 in Riyadh

RIYADH: More than 1,000 media professionals and officials are expected at the 22nd edition of the Arab Radio and Television Festival, which will be hosted in Saudi Arabia.

The four-day event will run from Nov. 7 to 10 in Riyadh.

Activities will include a broad selection of workshops, discussions and competitions based on the broadcast industry.

The festival, organized by the Saudi Broadcasting Authority, will also have representatives from media organizations including World Broadcasting Unions, European Broadcasting Union, Asia-Pacific Broadcasting Union, African Union of Broadcasting, Asia-Pacific Institute for Broadcasting Development, China Global Television Network, International Telecommunication Union and the Mediterranean Center for Audiovisual Communication.

Saudi Arabia’s hosting of the festival, considered one of the most prominent media forums, underscores the Kingdom’s importance in the Arab and Islamic worlds as well as the cultural transformation it is witnessing, state news agency SPA reported.


All you need to know about Saudi Arabia’s new social media influencer permit

Updated 11 August 2022

  • Kingdom’s media regulator says new law to take effect from October, with all social media influencers affected

LONDON: As more Saudis connect through their social media profiles and even begin to profit from these platforms, the Kingdom has launched a new licensing system to properly monitor the influencer industry.

From early October, every Saudi and non-Saudi content creator in the Kingdom who earns revenue through advertising on social media must first apply for an official permit from the General Commission for Audiovisual Media (GCAM).

For a fee of SR15,000 (roughly $4,000), content creators will receive a permit lasting three years, during which time they can work with as many private entities as they wish and promote any product or service, as long as it does not violate the Kingdom’s laws or values.

The incoming influencer license “is not a permit to censor or to block,” Esra Assery, CEO at GCAM, told Arab News. “It’s more of a permit to enable the maturity of the sector. We want to help those individuals grow, but grow in a professional way so they can make a career out of (social media revenue).”

The new regulations are being touted as legal protections, both for influencers and businesses wishing to advertise with them, so that rates and contractual obligations are standardized across the industry.

“The market is so unregulated,” said Assery. “We’re not against influencers or those individuals. Actually, we want to enable them. If you check out the new bylaw, it protects them also, because the bylaw regulates their relationship with the advertisers.”

Currently, anyone in Saudi Arabia is able to advertise on social media and earn money from deals with private entities — with payments per post climbing into the thousands of riyals, depending on the number of followers an influencer can reach.

Concern has been expressed that introducing permits and regulations will undermine how much money influencers can make and might even constitute censorship. However, GCAM insists the permits are designed to ensure transparency between influencers and their clients.

Saudi influencers, whether based in the Kingdom or abroad, must apply for the permit if they wish to work with a brand — local or international. However, non-Saudi residents in the country must follow a different track.

After applying to the Ministry of Investment for a permit to work in the country, they can then apply for an influencer permit through GCAM. However, non-Saudi residents must be represented by specific advertising agencies.

“While some influencers may focus on the short-term loss of paying the license fee, there is a huge benefit to licensing coming in as it legitimizes the sector on a national level,” Jamal Al-Mawed, founder and managing director of Gambit Communications, told Arab News.

“This is crucial in the influencer industry as it has been a bit of a wild west for marketing in the past, with no clear benchmarking for rates or contracts.”

Al-Mawed said that the new measures can protect brands that are susceptible to fraud “when they pay huge budgets to influencers who are buying fake followers and fake engagements. This creates a vicious circle, as hard-working content creators are undermined by the bad apples.”

Although the new license is unlikely to solve every issue overnight, “it does create a foundation for more professionalism and accountability,” Al-Mawed added.

In June, non-Saudi residents and visitors to the Kingdom were prohibited from posting ads on social media without a license. Those who ignore the ruling face a possible five-year prison sentence and fines of up to SR5 million.

GCAM announced the ban after finding “violations by numerous non-Saudi advertisers, both residents and visitors, on social media platforms.”

“After checking their data, it was found that they had committed systemic violations, including lack of commercial registrations and legal licenses, and they are not working under any commercial entity or foreign investment license,” the commission said at the time.

Now, with a formal license in place, such violations will be easier to monitor and the sector better regulated to ensure full transparency.

Although Saudi influencers will be able to hold full-time jobs while earning on the side through promotional campaigns on their social media profiles, the law states that non-Saudis can work only in one specific role while residing in the Kingdom.

However, the system does not apply to businesses and entities — such as bakeries or hair salons — that hold social media accounts and advertise their own products or services on these platforms. Only individuals are affected by the new law.

There are certain exceptions, however, such as individuals who have been invited to the country by a ministry or government entity in order to perform, including musicians and entertainers.

With the rise of social media over the past decade, content creators and so-called influencers with thousands of followers on Instagram, TikTok, Snapchat and other platforms have drawn audiences away from traditional outlets, such as television, newspapers and magazines, to new and largely unregulated media.

Sensing the shift in content consumption, advertisers have followed the herd. Crystal-blue waters caressing white, sandy beaches at luxury resorts and scrumptious feasts at the finest restaurants are now commonplace on influencer profiles as businesses rush to take advantage of more “natural-feeling” product placement.

However, regulators have struggled to keep up with this rapid transformation, leaving the process open to legal disputes, exploitation and abuse. That is why authorities elsewhere in the world have also been exploring influencer permits.

Dubai, widely seen as the influencer hub of the Middle East, is among them.

In 2018, the UAE’s National Media Council launched a new electronic media regulation system, which required social media influencers to obtain a license to operate in the country.

The cost of the annual license is 15,000 AED (roughly $4,000). Those who fail to obtain or renew the license can face penalties including a fine of up to 5,000 AED, a verbal or official warning, and even closure of their social media accounts.

The rules apply to influencers visiting the UAE as well. They must either have a license or be signed up with an NMC-registered influencer agency to operate in the country.

With Saudi Arabia progressing in the entertainment and creative industries, the introduction of the license is viewed as a step in the right direction.

“It’s great news for the industry,” said Al-Mawed. “When someone is licensed by the government to offer their services, that gives them a level of safety and trust and can help filter out the scammers who prefer to fly under the radar.”