Meta’s Oversight Board issued 20 decisions in its first year. Is that enough?

During the period covered by the report, the board received more than a million appeals, issued 20 decisions — 14 of which overturned Meta’s own rulings — and made 86 recommendations to the company. (AFP)
Updated 02 July 2022

  • The board has shown a commitment to bringing about positive change, and to lobbying Meta to do the same. But is that enough?
  • The first annual report from the independent review body, which is funded by Meta, explains the reasoning behind its 20 rulings and the 86 recommendations it has made

DUBAI: Meta’s Oversight Board has published its first annual report. Covering the period from October 2020 to December 2021, it describes the work the board has carried out in relation to how Meta, the company formerly known as Facebook, treats its users and their content, and the work that remains to be done.

The board is an independent body set up and funded by Meta to review content and content-moderation policies on Facebook and Instagram. It considers concerns raised by Meta itself and by users who have exhausted the company’s internal appeals process. It can recommend policy changes and make decisions that overrule the company’s decisions.

During the period covered by the report, the board received more than a million appeals, issued 20 decisions — 14 of which overturned Meta’s own rulings — and made 86 recommendations to the company.

“Through our first Annual Report, we’re able to demonstrate the significant impact the board has had on pushing Meta to become more transparent in its content policies and fairer in its content decisions,” Thomas Hughes, the board’s director, told Arab News.


One of the cases the board considered concerns a post that appeared on media organization Al Jazeera Arabic’s verified page in May 2021, and which was subsequently shared by a Facebook user in Egypt. It consisted of Arabic text and a photo showing two men, their faces covered, who were wearing camouflage and headbands featuring the insignia of the Palestinian Al-Qassam Brigades.

The text read: “The resistance leadership in the common room gives the occupation a respite until 6 p.m. to withdraw its soldiers from Al-Aqsa Mosque and Sheikh Jarrah neighborhood, otherwise he who warns is excused. Abu Ubaida – Al-Qassam Brigades military spokesman.”

The user who shared the post commented on it in Arabic by adding the word “ooh.”

Meta initially removed the post because Al-Qassam Brigades and its spokesperson, Abu Ubaida, are designated under Facebook’s Dangerous Individuals and Organizations community standard. However, it restored the post based on a ruling by the board.

The board said in its report that while the community standard policy clearly prohibits “channeling information or resources, including official communications, on behalf of a designated entity,” it also noted there is an exception to this rule for content that is published as “news reporting.” It added that the content in this case was a “reprint of a widely republished news report” by Al Jazeera and did not include any major changes other than the “addition of the non-substantive comment, ‘ooh.’”

Meta was unable to explain why two of its reviewers judged the content to be in violation of the platform’s content policies but noted that moderators are not required to record their reasoning for individual content decisions.





According to the report, the case also highlights the board’s objective of ensuring users are treated fairly because “the post, consisting of a republication of a news item from a legitimate outlet, was treated differently from content posted by the news organization itself.”

Based on allegations that Facebook was censoring Palestinian content, the board asked the platform a number of questions, including whether it had received any requests from Israel to remove content related to the 2021 Israeli-Palestinian conflict.

In response, Facebook said that it had not received any valid, legal requests from a government authority related to the user’s content in this case. However, it declined to provide any other requested information.

The board therefore recommended an independent review of these issues, as well as greater transparency about how Facebook responds to government requests.

“Following recommendations we issued after a case decision involving Israel/Palestine, Meta is conducting a review, using an independent body, to determine whether Facebook’s content-moderation community standards in Arabic and Hebrew are being applied without bias,” said Hughes.

In another case, the Oversight Board overturned Meta’s decision to remove an Instagram post by a public account that allows the discussion of queer narratives in Arabic culture. The post consisted of a series of pictures with a caption, in Arabic and English, explaining how each picture illustrated a different word that can be used in a derogatory way in the Arab world to describe men with “effeminate mannerisms.”


Meta removed the content for violating its hate speech policies but restored it when the user appealed. However, it later removed the content a second time for violating the same policies, after other users reported it.

According to the board, this was a “clear error, which was not in line with Meta’s hate speech policy.” It said that while the post does contain terms that are considered slurs, it is covered by an exception covering speech that is “used self-referentially or in an empowering way,” and also an exception that allows the quoting of hate speech to “condemn it or raise awareness.”

Each time the post was reported, a different moderator reviewed it. The board was, therefore, “concerned that reviewers may not have sufficient resources in terms of capacity or training to prevent the kind of mistake seen in this case.”

Hughes said: “As demonstrated in this report, we have a track record of success in getting Meta to consider how it handles posts in Arabic.

“We’ve succeeded in getting Meta to ensure its community standards are translated into all relevant languages, prioritizing regions where conflict or unrest puts users at most risk of imminent harm. Meta has also agreed to our call to ensure all updates to its policies are translated into all languages.”

These cases illustrate the board’s commitment to bringing about positive change, and to lobbying Meta to do the same, whether that means restoring an improperly deleted post or agreeing to an independent review of a case. But is this enough?

This month, Facebook once again failed a test of its ability to detect obviously unacceptable violent hate speech. The test was carried out by the nonprofit groups Global Witness and Foxglove, which created 12 text-based adverts featuring dehumanizing hate speech that called for the murder of people belonging to Ethiopia’s three main ethnic groups — the Amhara, the Oromo and the Tigrayans — and submitted them to the platform. Despite the clearly objectionable content, Facebook’s systems approved the adverts for publication.

In March, Global Witness ran a similar test using adverts about Myanmar that used similar hate speech. Facebook also failed to detect those. The ads were not actually published on Facebook because Global Witness alerted Meta to the test and the violations the platform had failed to detect.

In another case, Meta removed a post alleging the involvement of ethnic Tigrayan civilians in atrocities carried out in the Amhara region of Ethiopia, then restored it after the user appealed to the board. The Oversight Board, however, upheld the initial decision to remove the post, so the company had to take the content down once again.

In November 2021, Meta announced that it had removed a post by Ethiopia’s prime minister, Abiy Ahmed Ali, in which he urged citizens to rise up and “bury” rival Tigray forces who threatened the country’s capital. His verified Facebook page remains active, however, and has 4.1 million followers.

In addition to its failures over content relating to Myanmar and Ethiopia, Facebook has long been accused by rights activists of suppressing posts by Palestinians.

“Facebook has suppressed content posted by Palestinians and their supporters speaking out about human rights issues in Israel and Palestine,” said Deborah Brown, a senior digital rights researcher and advocate at Human Rights Watch.

During the May 2021 Israeli-Palestinian conflict, Facebook and Instagram removed content posted by Palestinians and posts that expressed support for Palestine. HRW documented several instances of this, including one in which Instagram removed a screenshot of the headlines and photos from three New York Times op-ed articles, to which the user had added a caption that urged Palestinians to “never concede” their rights.

In another instance, Instagram removed a post that included a picture of a building and the caption: “This is a photo of my family’s building before it was struck by Israeli missiles on Saturday, May 15, 2021. We have three apartments in this building.”

Digital rights group Sada Social said that in May 2021 alone it documented more than 700 examples of social media networks removing or restricting access to Palestinian content.

According to HRW, Meta’s acknowledgment of errors that were made and attempts to correct some of them are insufficient and do not address the scale and scope of reported content restrictions, nor do they adequately explain why they occurred in the first place.

Hughes acknowledged that some of the commitments to change made by Meta will take time to implement but added that it is important to ensure that they are “not kicked into the long grass and forgotten about.”

Meta admitted this year in its first Quarterly Update on the Oversight Board that it takes time to implement recommendations “because of the complexity and scale associated with changing how we explain and enforce our policies, and how we inform users of actions we’ve taken and what they can do about it.”

In the meantime, Hughes added: “The Board will continue to play a key role in the collective effort by companies, governments, academia and civil society to shape a brighter, safer digital future that will benefit people everywhere.”

However, the Oversight Board only reviews cases reported by users or by Meta itself. According to some experts, the issues with Meta go far beyond the current scope of the board’s mandate.

“For an oversight board to address these issues (Russian interference in the US elections), it would need jurisdiction not only over personal posts but also political ads,” wrote Dipayan Ghosh, co-director of the Digital Platforms and Democracy Project at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School.

“Beyond that, it would need to be able to not only take down specific pieces of content but also to halt the flow of American consumer data to Russian operatives and change the ways that algorithms privilege contentious content.”

He went on to suggest that the board’s authority should be expanded from content takedowns to include “more critical concerns” such as the company’s data practices and algorithmic decision-making because “no matter where we set the boundaries, Facebook will always want to push them. It knows no other way to maintain its profit margins.”


Inspiring ‘Passion for Reading’ and Fostering Cultural Exchange: Introducing the Media Partnership between the Riyadh International Book Fair 2022 and SRMG

Updated 30 September 2022


RIYADH: For the second year in a row, the Saudi Research and Media Group (SRMG) has announced its participation in the Riyadh International Book Fair 2022 as the official media partner. The renewed partnership aims to inspire readers and deepen their passion for reading, as well as to foster cultural exchange.

During the book fair, held in the Saudi capital from Sept. 29 to Oct. 8, 2022, the group will provide extensive coverage of the fair and its events, coupled with interactive programs and activities, through more than 30 participating SRMG media outlets and platforms, reaching audiences in several languages.

For the first time, Raff Publishing will also unveil Arabic editions of international publications and titles, spanning a variety of books.

In this context, Jomana R. Alrashid, CEO of SRMG, said: “The Riyadh International Book Fair is a key cultural event and aligns with the Group’s commitment to supporting knowledge economies and stimulating creativity and innovation, in KSA and beyond. Therefore, our renewed partnership highlights the ongoing role SRMG media outlets and platforms play in providing unique and distinctive coverage of the fair and its visitors.”

At this year’s fair, SRMG’s Raff Publishing will unveil a variety of books representing its first series of publications, including Arabic editions of international publications and titles, further enriching Arabic content through works by prominent Saudi and Arab writers. In its designated space, located next to the VIP entrance of the fair, Raff Publishing will also offer special events and unique, interactive digital experiences, and will highlight its collaborations with emerging and established writers.

For its part, SRMG’s Manga Alarabia will present several initiatives, including specialized workshops, an interactive photo booth and a selection of its most prominent publications, in addition to activities for children in the book fair’s dedicated pavilion.

SRMG’s Thmanyah, a market leader in podcasts and documentary film production, will have its own dedicated studio to conduct interviews with VIPs and distinguished guests.

The annual Riyadh International Book Fair is one of the most prominent Arab book fairs in terms of visitor numbers, sales volume and the diversity of its cultural programs, and it attracts the most prestigious local, regional and international publishing houses. The fair also serves as a platform for companies and individuals working or interested in the knowledge, literature, publishing and translation sectors to present their works, books and offerings.


Spotify’s new report delves into how UAE’s Gen Zs are driving culture

Updated 30 September 2022

  • Annual Culture Next report reveals the behaviors, attitudes and mindsets of Gen Zs in the UAE

DUBAI: Spotify has released the UAE edition of its annual global culture and trends report, Culture Next.

In the fourth edition of the report, the second to feature the UAE, Spotify delves deeper into the behaviors, attitudes and mindsets of its largest audience segment, Generation Z (aged 15 to 25), and how they differ from those of Generation Y, known as millennials (aged 26 to 40).

In 2021, Gen Zs globally streamed music more often than they used any other media (including videos, games, and TV), and shared more Spotify playlists and engaged in more group listening sessions than any other generation, according to the report.

In the first quarter of 2022 alone, 18 to 24-year-olds played more than 578 billion minutes of music on Spotify — more than any other segment, and roughly 16 billion minutes more than millennials (25 to 34-year-olds) around the world.

“Audio has always been part of our lives,” Mark Abou Jaoude, Spotify’s head of music in the Middle East and North Africa, told Arab News.

“Streaming is being seen more and more as a key driver for discovery and the formation of a global community that identifies with one another through audio. It’s a way of self-expression and it's screenless,” he added.

Video, as a format, has grown in popularity in recent times, spurred by short-form video such as that on TikTok and Instagram’s Reels.

Jaoude, however, stresses the importance of audio, particularly for Gen Zs. “A video with no audio is hard to comprehend, for example, but a pure audio piece is not. Audio enriches storytelling,” he said.

The report highlights key differences between Gen Zs and millennials, with the former having gone from an “emerging” generation to the “center stage of culture.”

Firstly, while both generations are stressed, Gen Zs are more so. “Millennials were raised in a boom, Zs in a bust,” said Jaoude. They have experienced significant downturns associated with the crash of 2008 and later COVID-19, which they experienced mostly as adults, he explained.

In this environment, they are turning to audio as a safe space. Fifty-nine percent of 18 to 24-year-olds in the UAE said they turn to podcasts for answers to hard or personal questions before talking to their families about them, and 66 percent said they listen to podcasts to inform the conversations they have with their friends.

Moreover, according to 68 percent of Gen Zs in the UAE, audio helps them understand themselves better, and 80 percent said it allows them to explore different sides of their personality.

All of this means that for Gen Zs, audio has always been a part of their lives, and they use it for everything from creativity and self-expression, to discovering aspects of their own personality.

The second factor setting Gen Zs apart is that they are “the most racially and culturally diverse generation and therefore they demand this diversity be reflected through their lifestyle, the brands they engage with, social media and the audio they consume,” according to Jaoude.

Self-expression and creativity are core to this generation and so, “they lean into music, artists, podcasts, and playlists to shape the stories they tell about themselves,” he added.

FASTFACTS

66 percent listen to podcasts to inform the conversations they have with their friends.

82 percent said they had learned something about themselves by looking back on their listening habits.

74 percent believe that their listening habits tell a story about who they are.

78 percent listen to music from movies or shows because they like to feel like they are a character in the story.

71 percent like listening to and watching media from earlier decades because it reminds them of when things were simpler.

75 percent like it when brands bring back old aesthetic styles.

72 percent love it when brands produce retro products or content.

For instance, 82 percent in the UAE said they had learned something about themselves by looking back on their listening habits, and 74 percent believe that their listening habits tell a story about who they are.

It might appear that they are self-involved, but according to the report, they are driving the “main-character energy” trend, in which people use social media or digital audio to make themselves feel like the center of attention. This is evident in the popularity of playlists like “My Life is a Movie” and ones containing “POV” in the title.

Seventy-eight percent of Gen Zs in the UAE listen to music from movies or shows because “they like to feel like they are a character in the story,” according to Jaoude, and 79 percent of all Spotify playlists globally with “POV” in the title were created by Gen Zs.

Jaoude said: “They are experts in structuring and communicating their individual stories through playlists. They create their own playlists on Spotify and even use collaborative playlist features to ask their friends and community to exchange songs.”

While millennials are known for being nostalgic, Gen Zs go even further down memory lane, he added. They are “reinventing nostalgia” by filtering pop culture “through a contemporary perspective to access and inspire something new and unique to them,” he said.

Millennials are nostalgic for the times they have lived through; Gen Zs, on the other hand, are nostalgic for eras that offer some form of reprieve from current times, which they find stressful and anxiety-inducing.

“Among Zs, the past is all fuel for the future — and that is true for more than music,” Jaoude said.

It is why 71 percent of Gen Zs in the UAE said they like listening to and watching media from earlier decades, because it reminds them of when things were simpler; 75 percent like it when brands bring back old aesthetic styles; and 72 percent love it when brands produce retro products or content.

“You will see that movement in today’s fashion and the sound of music; there’s a lot of borrowing from previous eras and artists add their personal flair or vision to that sound,” said Jaoude.

Gen Zs’ unique problems, and habits, provide an untapped opportunity for marketers. As Jaoude said: “They are seeking new opportunities to share themselves through audio — and looking to brands to help make it happen.”

Forty-nine percent in the UAE said they like being able to select the ad they listen to on a digital audio streaming service, and more than a third said they like it when they can interact with ads.

For example, Spotify worked with Adidas on the “Nite Jogger” campaign where they created a custom digital experience that gleaned the “sonic traits” of listeners’ nighttime streaming activity to create a custom playlist unique to each individual. The campaign racked up 32.4 million impressions and over 9 million unique visitors.

“While brands of the past may have prioritized keeping an iron grip over their messaging, there’s a huge opportunity to connect with the next generation by handing the reins over to them and allowing them to customize their experience — especially in the space of audio,” said Jaoude.


CPJ calls on Iran to investigate if journalists are being targeted by Iranian forces

Updated 30 September 2022

  • The call comes after a journalist in Iraqi Kurdistan was injured during Iranian strikes on the region

DUBAI: Kurdistan 24’s media team came under attack while covering Iranian drone and missile attacks on Iranian-Kurdish opposition parties based in the Kurdistan Region on Sept. 28, according to the broadcaster.

In a statement, the Erbil-based media company said that its correspondent Soran Kamaran was seriously injured, adding: “We reiterate that Kurdistan 24 is continuing its professional coverage of the events. And we hope all sides in the conflict avoid targeting journalists and media workers.”

The cameraman working with Kamaran was not hurt, Kurdistan 24’s newsroom manager and anchor, Kovan Izzat, told the Committee to Protect Journalists (CPJ). But Kamaran was taken to a hospital in Erbil, where he underwent two surgeries, Izzat said.

The media watchdog has called for an investigation into whether journalists are being targeted by Iranian forces.

“Iran’s drone strikes inevitably cause civilian casualties, including those of journalists documenting the attacks,” said Sherif Mansour, CPJ’s MENA program coordinator. “Iranian and Kurdish authorities must take serious measures to avoid harming civilians and to hold anyone violating international law accountable.”

The Kurdistan Regional Government also strongly condemned the “repetitive (Iranian) violations of the sovereignty of the Kurdistan Region,” reported Kurdistan 24.

At least seven people have been killed and 24 injured as a result of Iran’s attacks, according to Dr. Saman Barzinji, the minister of health for the Kurdistan Regional Government in Iraq.

On Sept. 28, the UN Assistance Mission for Iraq said that “rocket diplomacy is a reckless act with devastating consequences,” adding that “these attacks need to cease immediately.”

CPJ could not immediately find a contact for Kamaran’s family. The organization reached out to the Iranian mission to the UN for comment but has not received a response.

 


TikTok launches Creator Hub program in UAE and Egypt

Updated 29 September 2022

  • The new initiative aims to identify talented creators and connect them with the right mentors and skill-building experts
  • The annual competition requires creators to produce a creative content idea around a specific theme

DUBAI: TikTok has announced the launch of the inaugural TikTok Creator Hub program in the UAE and Egypt.

The new initiative aims to identify talented creators and connect them with the right mentors and skill-building experts to support and nurture their skills.

The annual competition requires creators to produce a creative content idea around a specific theme.

A group of judges, including top TikTok content creators from across the MENA region, will assess the skills of the creators and provide them with the required learning to help elevate their content, as well as advise them on their career as a creator.

Inspired by the 2022 UN Climate Change Conference of the Parties (COP27), which will be held in Egypt in November, this year’s theme is climate change.

“With the launch of the inaugural edition of TikTok Creator Hub, we aim to generate awareness and advocacy around causes and pressing issues that touch the community, securing a dedicated destination for content creation and conversations focused on the most important societal issues of our time, such as climate change,” said Tarek Abdalla, regional general manager at TikTok Middle East, Africa, Turkiye, Pakistan and South Asia.

The theme also aligns with TikTok’s launch of the #ClimateAction program in support of COP27 in the MENA region, which is a campaign encouraging TikTok users to join the climate conversation.

The TikTok Creator Hub program is divided into three phases, which include online learning modules, a live training session and the judging process to name the winner of the competition.

Once the creators have been shortlisted, TikTok will host a welcome workshop in collaboration with celebrity creators to introduce them to the TikTok Creator Hub concept.

TikTok will also host a live training day, enabling creators to spend one live session with a creator mentor, ahead of their creation of a TikTok focused on climate change, which will be submitted for the judging process.

Lastly, the judges will choose the winning entries, which will be announced in November.

Creators living in the UAE and Egypt who would like to participate can visit the TikTok MENA Creator Hub website MENATikTokCreatorHub.com to register and share a 30-60 second video on why they want to be part of the program for a chance to be selected.

Registration closes on Oct. 10.


Brands blast Twitter for ads next to child pornography accounts

REUTERS/Florence Lo/Illustration/File Photo
Updated 29 September 2022

  • Mazda, Forbes and Dyson are among the brands to suspend their marketing campaigns on the platform


Some major advertisers including Dyson, Mazda, Forbes and PBS Kids have suspended their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets soliciting child pornography, the companies told Reuters.

DIRECTV and Thoughtworks also told Reuters late on Wednesday they have paused their advertising on Twitter.

Brands ranging from Walt Disney Co (DIS.N), NBCUniversal (CMCSA.O) and Coca-Cola Co (KO.N) to a children's hospital were among more than 30 advertisers that appeared on the profile pages of Twitter accounts peddling links to the exploitative material, according to a Reuters review of accounts identified in new research about child sex abuse online from cybersecurity group Ghost Data.

Some of the tweets included keywords related to “rape” and “teens,” and appeared alongside promoted tweets from corporate advertisers, the Reuters review found. In one example, a promoted tweet for the shoe and accessories brand Cole Haan appeared next to a tweet in which a user said they were “trading teen/child” content.

“We’re horrified,” David Maddocks, brand president at Cole Haan, told Reuters after being notified that the company’s ads appeared alongside such tweets. “Either Twitter is going to fix this, or we’ll fix it by any means we can, which includes not buying Twitter ads.”

In another example, a user tweeted searching for content of “Yung girls ONLY, NO Boys,” which was immediately followed by a promoted tweet for Texas-based Scottish Rite Children's Hospital. Scottish Rite did not return multiple requests for comment.

In a statement, Twitter spokesperson Celeste Carswell said the company “has zero tolerance for child sexual exploitation” and is investing more resources dedicated to child safety, including hiring for new positions to write policy and implement solutions.

She added that Twitter is working closely with its advertising clients and partners to investigate and take steps to prevent the situation from happening again.

Twitter’s challenges in identifying child abuse content were first reported in an investigation by tech news site The Verge in late August. The emerging pushback from advertisers that are critical to Twitter’s revenue stream is reported here by Reuters for the first time.

Like all social media platforms, Twitter bans depictions of child sexual exploitation, which are illegal in most countries. But it permits adult content generally and is home to a thriving exchange of pornographic imagery, which comprises about 13 percent of all content on Twitter, according to an internal company document seen by Reuters.

Twitter declined to comment on the volume of adult content on the platform.

Ghost Data identified more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period this month. Twitter failed to remove more than 70 percent of those accounts during the study period, according to the group, which shared its findings exclusively with Reuters.

Reuters could not independently confirm the accuracy of Ghost Data’s findings in full, but reviewed dozens of accounts that remained online and were soliciting materials for “13+” and “young looking nudes.”

After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed about 300 additional accounts from the network, but more than 100 others still remained on the site the following day, according to Ghost Data and a Reuters review.

On Monday, Reuters shared with Twitter the full list of more than 500 accounts furnished by Ghost Data; the company reviewed the accounts and permanently suspended them for violating its rules, Twitter’s Carswell said on Tuesday.

In an email to advertisers on Wednesday morning, ahead of the publication of this story, Twitter said it “discovered that ads were running within Profiles that were involved with publicly selling or soliciting child sexual abuse material.”

Andrea Stroppa, the founder of Ghost Data, said the study was an attempt to assess Twitter’s ability to remove the material. He said he personally funded the research after receiving a tip about the topic.

Twitter’s transparency reports on its website show it suspended more than 1 million accounts last year for child sexual exploitation.

It made about 87,000 reports to the National Center for Missing and Exploited Children, a government-funded non-profit that facilitates information sharing with law enforcement, according to that organization’s annual report.

“Twitter needs to fix this problem ASAP, and until they do, we are going to cease any further paid activity on Twitter,” said a spokesperson for Forbes.

“There is no place for this type of content online,” a spokesperson for carmaker Mazda USA said in a statement to Reuters, adding that in response, the company is now prohibiting its ads from appearing on Twitter profile pages.

A Disney spokesperson called the content “reprehensible” and said they are “doubling-down on our efforts to ensure that the digital platforms on which we advertise, and the media buyers we use, strengthen their efforts to prevent such errors from recurring.”

A spokesperson for Coca-Cola, which had a promoted tweet appear on an account tracked by the researchers, said it did not condone the material being associated with its brand and said “any breach of these standards is unacceptable and taken very seriously.”

NBCUniversal said it has asked Twitter to remove the ads associated with the inappropriate content.

CODE WORDS

Twitter is hardly alone in grappling with moderation failures related to child safety online. Child welfare advocates say the number of known child sexual abuse images has soared from thousands to tens of millions in recent years, as predators have used social networks including Meta’s Facebook and Instagram to groom victims and exchange explicit images.

Among the accounts identified by Ghost Data, nearly all of the traders of child sexual abuse material marketed the materials on Twitter, then instructed buyers to reach them on messaging services such as Discord and Telegram to complete payment and receive the files, which were stored on cloud storage services like New Zealand-based Mega and US-based Dropbox, according to the group’s report.

A Discord spokesperson said the company had banned one server and one user for violating its rules against sharing links or content that sexualize children.

Mega said a link referenced in the Ghost Data report was created in early August and soon after deleted by the user, whom it declined to identify. Mega said it permanently closed the user’s account two days later.

Dropbox and Telegram said they use a variety of tools to moderate content but did not provide additional detail on how they would respond to the report.

Still, the reaction from advertisers poses a risk to Twitter’s business, which earns more than 90 percent of its revenue by selling digital advertising placements to brands seeking to market products to the service’s 237 million daily active users.

Twitter is also battling Tesla CEO and billionaire Elon Musk in court, as he attempts to back out of a $44 billion deal to buy the social media company over complaints about the prevalence of spam accounts and their impact on the business.

A team of Twitter employees concluded in a report dated February 2021 that the company needed more investment to identify and remove child exploitation material at scale, noting the company had a backlog of cases to review for possible reporting to law enforcement.

“While the amount of (child sexual exploitation content) has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not,” according to the report, which was prepared by an internal team to provide an overview about the state of child exploitation material on Twitter and receive legal advice on the proposed strategies.

“Recent reports about Twitter provide an outdated, moment in time glance at just one aspect of our work in this space, and is not an accurate reflection of where we are today,” Carswell said.

The traffickers often use code words such as “cp” for child pornography and are “intentionally as vague as possible,” to avoid detection, according to the internal documents.

The more that Twitter cracks down on certain keywords, the more that users are nudged to use obfuscated text, which “tend to be harder for (Twitter) to automate against,” the documents said.

Ghost Data’s Stroppa said such tricks would complicate efforts to hunt down the materials, but noted that his small team of five researchers, working without access to Twitter’s internal resources, was able to find hundreds of accounts within 20 days.

Twitter did not respond to a request for further comment.