Brands blast Twitter for ads next to child pornography accounts

A promoted tweet on the Twitter app is displayed on a mobile phone near a Twitter logo, in this illustration picture taken Sept. 8, 2022. (REUTERS)
Updated 29 September 2022

  • Mazda, Forbes and Dyson are among the brands to suspend their marketing campaigns on the platform

Some major advertisers including Dyson, Mazda, Forbes and PBS Kids have suspended their marketing campaigns or removed their ads from parts of Twitter because their promotions appeared alongside tweets soliciting child pornography, the companies told Reuters.

DIRECTV and Thoughtworks also told Reuters late on Wednesday they have paused their advertising on Twitter.

Brands ranging from Walt Disney Co (DIS.N), NBCUniversal (CMCSA.O) and Coca-Cola Co (KO.N) to a children's hospital were among more than 30 advertisers that appeared on the profile pages of Twitter accounts peddling links to the exploitative material, according to a Reuters review of accounts identified in new research about child sex abuse online from cybersecurity group Ghost Data.

Some of the tweets included keywords related to “rape” and “teens,” and appeared alongside promoted tweets from corporate advertisers, the Reuters review found. In one example, a promoted tweet for shoe and accessories brand Cole Haan appeared next to a tweet in which a user said they were “trading teen/child” content.

“We’re horrified,” David Maddocks, brand president at Cole Haan, told Reuters after being notified that the company’s ads appeared alongside such tweets. “Either Twitter is going to fix this, or we’ll fix it by any means we can, which includes not buying Twitter ads.”

In another example, a user tweeted that they were searching for content of “Yung girls ONLY, NO Boys,” which was immediately followed by a promoted tweet for Texas-based Scottish Rite Children's Hospital. Scottish Rite did not return multiple requests for comment.

In a statement, Twitter spokesperson Celeste Carswell said the company “has zero tolerance for child sexual exploitation” and is investing more resources dedicated to child safety, including hiring for new positions to write policy and implement solutions.

She added that Twitter is working closely with its advertising clients and partners to investigate and take steps to prevent the situation from happening again.

Twitter’s challenges in identifying child abuse content were first reported in an investigation by tech news site The Verge in late August. The emerging pushback from advertisers that are critical to Twitter’s revenue stream is reported here by Reuters for the first time.

Like all social media platforms, Twitter bans depictions of child sexual exploitation, which are illegal in most countries. But it permits adult content generally and is home to a thriving exchange of pornographic imagery, which comprises about 13 percent of all content on Twitter, according to an internal company document seen by Reuters.

Twitter declined to comment on the volume of adult content on the platform.

Ghost Data identified more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period this month. Twitter failed to remove more than 70 percent of the accounts during the study period, according to the group, which shared the findings exclusively with Reuters.

Reuters could not independently confirm the accuracy of Ghost Data’s findings in full, but reviewed dozens of accounts that remained online and were soliciting materials for “13+” and “young looking nudes.”

After Reuters shared a sample of 20 accounts with Twitter last Thursday, the company removed about 300 additional accounts from the network, but more than 100 others still remained on the site the following day, according to Ghost Data and a Reuters review.

On Monday, Reuters shared the full list of more than 500 accounts furnished by Ghost Data. Twitter reviewed the accounts and permanently suspended them for violating its rules, Carswell said on Tuesday.

In an email to advertisers on Wednesday morning, ahead of the publication of this story, Twitter said it “discovered that ads were running within Profiles that were involved with publicly selling or soliciting child sexual abuse material.”

Andrea Stroppa, the founder of Ghost Data, said the study was an attempt to assess Twitter’s ability to remove the material. He said he personally funded the research after receiving a tip about the topic.

Twitter’s transparency reports on its website show it suspended more than 1 million accounts last year for child sexual exploitation.

It made about 87,000 reports to the National Center for Missing and Exploited Children, a government-funded non-profit that facilitates information sharing with law enforcement, according to that organization's annual report.

“Twitter needs to fix this problem ASAP, and until they do, we are going to cease any further paid activity on Twitter,” said a spokesperson for Forbes.

“There is no place for this type of content online,” a spokesperson for carmaker Mazda USA said in a statement to Reuters, adding that in response, the company is now prohibiting its ads from appearing on Twitter profile pages.

A Disney spokesperson called the content “reprehensible” and said they are “doubling-down on our efforts to ensure that the digital platforms on which we advertise, and the media buyers we use, strengthen their efforts to prevent such errors from recurring.”

A spokesperson for Coca-Cola, which had a promoted tweet appear on an account tracked by the researchers, said it did not condone the material being associated with its brand, adding that “any breach of these standards is unacceptable and taken very seriously.”

NBCUniversal said it has asked Twitter to remove the ads associated with the inappropriate content.

CODE WORDS

Twitter is hardly alone in grappling with moderation failures related to child safety online. Child welfare advocates say the number of known child sexual abuse images has soared from thousands to tens of millions in recent years, as predators have used social networks including Meta’s Facebook and Instagram to groom victims and exchange explicit images.

For the accounts identified by Ghost Data, nearly all the traders of child sexual abuse material marketed the materials on Twitter, then instructed buyers to reach them on messaging services such as Discord and Telegram in order to complete payment and receive the files, which were stored on cloud storage services like New Zealand-based Mega and US-based Dropbox, according to the group’s report.

A Discord spokesperson said the company had banned one server and one user for violating its rules against sharing links or content that sexualize children.

Mega said a link referenced in the Ghost Data report was created in early August and soon after deleted by the user, which it declined to identify. Mega said it permanently closed the user's account two days later.

Dropbox and Telegram said they use a variety of tools to moderate content but did not provide additional detail on how they would respond to the report.

Still, the reaction from advertisers poses a risk to Twitter’s business, which earns more than 90 percent of its revenue by selling digital advertising placements to brands seeking to market products to the service’s 237 million daily active users.

Twitter is also battling Tesla CEO and billionaire Elon Musk in court, as he attempts to back out of his $44 billion deal to buy the social media company over complaints about the prevalence of spam accounts and their impact on the business.

A team of Twitter employees concluded in a report dated February 2021 that the company needed more investment to identify and remove child exploitation material at scale, noting the company had a backlog of cases to review for possible reporting to law enforcement.

“While the amount of (child sexual exploitation content) has grown exponentially, Twitter’s investment in technologies to detect and manage the growth has not,” according to the report, which was prepared by an internal team to provide an overview about the state of child exploitation material on Twitter and receive legal advice on the proposed strategies.

“Recent reports about Twitter provide an outdated, moment in time glance at just one aspect of our work in this space, and is not an accurate reflection of where we are today,” Carswell said.

The traffickers often use code words such as “cp” for child pornography and are “intentionally as vague as possible,” to avoid detection, according to the internal documents.

The more that Twitter cracks down on certain keywords, the more that users are nudged to use obfuscated text, which “tend to be harder for (Twitter) to automate against,” the documents said.

Ghost Data’s Stroppa said that such tricks would complicate efforts to hunt down the materials, but noted that his small team of five researchers, with no access to Twitter’s internal resources, was able to find hundreds of accounts within 20 days.

Twitter did not respond to a request for further comment.


Twitter exec says moving fast on moderation, as harmful content surges

A Twitter logo hangs outside the company's San Francisco offices on Nov. 1, 2022. (AP)
Updated 03 December 2022

  • Twitter is restricting hashtags and search results frequently associated with abuse, like those aimed at looking up “teen” pornography

SAN FRANCISCO: Elon Musk’s Twitter is leaning heavily on automation to moderate content, doing away with certain manual reviews and favoring restrictions on distribution rather than removing certain speech outright, its new head of trust and safety told Reuters.
Twitter is also more aggressively restricting abuse-prone hashtags and search results in areas including child exploitation, regardless of potential impacts on “benign uses” of those terms, said Twitter Vice President of Trust and Safety Product Ella Irwin.
“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Irwin said on Thursday, in the first interview a Twitter executive has given since Musk’s acquisition of the social media company in late October.
Her comments come as researchers are reporting a surge in hate speech on the social media service, after Musk announced an amnesty for accounts suspended under the company’s previous leadership that had not broken the law or engaged in “egregious spam.”
The company has faced pointed questions about its ability and willingness to moderate harmful and illegal content since Musk slashed half of Twitter’s staff and issued an ultimatum to work long hours that resulted in the loss of hundreds more employees.
And advertisers, Twitter’s main revenue source, have fled the platform over concerns about brand safety.
On Friday, Musk vowed “significant reinforcement of content moderation and protection of freedom of speech” in a meeting with French President Emmanuel Macron.
Irwin said Musk encouraged the team to worry less about how their actions would affect user growth or revenue, saying safety was the company’s top priority. “He emphasizes that every single day, multiple times a day,” she said.
The approach to safety Irwin described at least in part reflects an acceleration of changes that had already been planned since last year around Twitter’s handling of hateful conduct and other policy violations, according to former employees familiar with that work.
One approach, captured in the industry mantra “freedom of speech, not freedom of reach,” entails leaving up certain tweets that violate the company’s policies but barring them from appearing in places like the home timeline and search.
Twitter has long deployed such “visibility filtering” tools around misinformation and had already incorporated them into its official hateful conduct policy before the Musk acquisition. The approach allows for more freewheeling speech while cutting down on the potential harms associated with viral abusive content.
The number of tweets containing hateful content on Twitter rose sharply in the week before Musk tweeted on Nov. 23 that impressions, or views, of hateful speech were declining, according to the Center for Countering Digital Hate – one example of researchers pointing to the prevalence of such content even as Musk touts a reduction in its visibility.
Tweets containing words that were anti-Black that week were triple the number seen in the month before Musk took over, while tweets containing a gay slur were up 31 percent, the researchers said.
‘MORE RISKS, MOVE FAST’
Irwin, who joined the company in June and previously held safety roles at other companies including Amazon.com and Google, pushed back on suggestions that Twitter did not have the resources or willingness to protect the platform.
She said layoffs did not significantly impact full-time employees or contractors working on what the company referred to as its “Health” divisions, including in “critical areas” like child safety and content moderation.
Two sources familiar with the cuts said that more than 50 percent of the Health engineering unit was laid off. Irwin did not immediately respond to a request for comment on the assertion, but had previously denied that the Health team was severely impacted by layoffs.
She added that the number of people working on child safety had not changed since the acquisition, and that the product manager for the team was still there. Irwin said Twitter backfilled some positions for people who left the company, though she declined to provide specific figures for the extent of the turnover.
She said Musk was focused on using automation more, arguing that the company had in the past erred on the side of using time- and labor-intensive human reviews of harmful content.
“He’s encouraged the team to take more risks, move fast, get the platform safe,” she said.
On child safety, for instance, Irwin said Twitter had shifted toward automatically taking down tweets reported by trusted figures with a track record of accurately flagging harmful posts.
Carolina Christofoletti, a threat intelligence researcher at TRM Labs who specializes in child sexual abuse material, said she has noticed Twitter recently taking down some content as fast as 30 seconds after she reports it, without acknowledging receipt of her report or confirmation of its decision.
In the interview on Thursday, Irwin said Twitter took down about 44,000 accounts involved in child safety violations, in collaboration with cybersecurity group Ghost Data.
Twitter is also restricting hashtags and search results frequently associated with abuse, like those aimed at looking up “teen” pornography. Past concerns about the impact of such restrictions on permitted uses of the terms were gone, she said.
The use of “trusted reporters” was “something we’ve discussed in the past at Twitter, but there was some hesitancy and frankly just some delay,” said Irwin.
“I think we now have the ability to actually move forward with things like that,” she said.

 


Hate speech on the rise on Twitter despite Elon Musk’s claims

A view of the Twitter logo at its corporate headquarters in San Francisco, California, U.S. November 18, 2022. (REUTERS)
Updated 02 December 2022

  • Data from researchers reveals a sharp increase in racial slurs and other offensive terms on the platform immediately after the billionaire’s takeover of the platform
  • In the 12 days after Musk’s takeover, the Institute for Strategic Dialogue tracked 450 new Twitter accounts linked to Daesh, a 69 percent increase on the previous 12 days

DUBAI: On Nov. 4, just over a week after he completed his takeover of Twitter, billionaire Elon Musk tweeted that the platform had “seen hateful speech at times this week decline *below* our prior norms, contrary to what you may read in the press.”

However, newly published data from several organizations suggests otherwise.

In the first 12 days following the takeover, the Institute for Strategic Dialogue tracked 450 newly created Twitter accounts linked to Daesh, a 69 percent increase compared with the previous 12 days.

Meanwhile, the Center for Countering Digital Hate (CCDH) said that in the week beginning Oct. 31, the first full week the platform was under the ownership of Musk, one particular racial slur appeared in tweets and retweets 26,228 times, triple the 2022 average for that slur. A derogatory term used to attack another group was mentioned in 33,926 tweets and retweets, a 53 percent increase on the 2022 average.

Musk’s takeover of Twitter has been controversial from the moment he announced it. It came as social media platforms had been under increasing scrutiny for some time over their policies on content moderation and efforts to combat hate speech.

Musk, however, describes himself as a “free speech absolutist” and said he wanted to change the way in which content is moderated on the platform. During a TED Talk in April, the same month he reached his agreement to buy Twitter, he talked about his plans for moderation and suggested he might make Twitter’s algorithm open source.

On Oct. 28, the day after his takeover was completed, he announced his plans to form “a content moderation council with widely diverse viewpoints.”

On Nov. 4 he said: “Twitter’s strong commitment to content moderation remains absolutely unchanged.”

But CCDH’s analysis revealed that despite early claims by Musk and Twitter’s head of trust and safety at the time, Yoel Roth, that the platform had succeeded in reducing the number of times hate speech was seen on Twitter’s search and trending pages, the actual volume of hateful tweets on the platform increased.

Before Musk bought Twitter, for example, slurs against Black Americans appeared on the platform an average of 1,282 times a day. In the days after, the number increased to 3,876 times a day, The New York Times reported. Antisemitic posts increased by more than 61 percent in the two weeks following Musk’s arrival, it added.

A separate study by the Network Contagion Research Institute found an increase of nearly 500 percent in the use of a derogatory racial term for Black people in the 12 hours immediately following the shift of ownership to Musk.

Analysts note that an escalation in hate speech on Twitter is not only dangerous for users and society as a whole, but also represents a threat to the company itself. According to research and information center Media Matters for America, 50 of the platform’s top 100 advertisers have either announced they will no longer advertise on Twitter or have simply stopped.

Collectively, they have accounted for nearly $2 billion in advertising revenue on the platform since 2020, and more than $750 million in 2022 alone.

Roth quit the company last month and later said: “I realized that even if I spent all day, every day trying to avert whatever the next disaster was, there were going to be the ones that got through.”

Angelo Carusone, president of Media Matters for America, said that Musk’s Twitter is a cacophony of dictatorship, egotism and blatant disregard for the advice of experts.

If it continues, he warned, “under Musk’s leadership, Twitter will become a fever swamp of dangerous conspiracy theories, partisan chicanery and operationalized harassment.”

 

 


Twitter suspends Kanye’s account again for violating rules

Updated 03 December 2022

  • Twitter owner Elon Musk had welcomed the return of the rapper, now known as Ye, to the platform in October

DUBAI: Twitter Inc. on Friday suspended Kanye West’s account again, just two months after it was reinstated, with owner Elon Musk saying the rapper had violated the platform’s rules prohibiting incitement to violence.
Musk, who calls himself a free speech absolutist, had welcomed the return of the rapper, now known as Ye, to the platform in October.
“I tried my best. Despite that, he again violated our rule against incitement to violence. Account will be suspended,” Musk tweeted late on Thursday.
West’s account was suspended within an hour of Musk’s post, made in a reply to a Twitter user who had said “Elon Fix Kanye Please.” Twitter did not immediately respond to a request for comment.
Before suspending Ye’s account, which had over 30 million followers, Twitter had restricted one of his tweets. Reuters could not independently verify the contents of the post.
The social media platform restored the rapper’s account before the completion of its $44 billion takeover by Musk. Musk later clarified that he had had no role in bringing Ye back on Twitter.
Ye on Thursday tweeted a photo of Hollywood mogul Ari Emanuel spraying water at the back of Musk’s head with a hose. He captioned the picture “Let’s always remember this as my final tweet #ye24,” before the account was suspended.
Musk responded that Ye’s account was suspended for incitement to violence, and not for posting “an unflattering pic of me being hosed by Ari.”
In November, Twitter reinstated some controversial accounts that had been banned or suspended, including satirical website Babylon Bee and comedian Kathy Griffin.
Musk also decided to reinstate former US President Donald Trump’s account after a majority of users voted in favor of bringing Trump back in a Twitter poll.

 


MBC Group to expand Shahid catalog with hit anime titles

Updated 02 December 2022

  • Group secured rights to various series, including TV Tokyo’s ‘Bleach: Thousand-Year Blood War,’ ‘Bleach’ and ‘One Piece’

LONDON: MBC Group, the Middle East and North Africa region’s leading media company, announced new partnerships on Thursday to expand the number of anime titles available on its streaming platform Shahid.

The Riyadh-based organization said in a statement it had teamed up “with key anime studios and production houses in Japan and beyond to bring more anime content to its streaming platform.”

“Anime is extremely popular in the Middle East region — particularly in the Kingdom of Saudi Arabia — so needless to say, we are incredibly excited to be making new additions to our ever-expanding anime catalog on Shahid, bringing new and hit titles that audiences will love exploring,” said Tareq Al-Ibrahim, director of content for subscription video on demand at Shahid.

As part of the new deals, MBC Group said it has secured exclusive rights in MENA to TV Tokyo’s “Bleach: Thousand-Year Blood War,” the 52-episode Japanese anime television series based on the “Bleach” manga series by Tite Kubo, and a direct sequel to the “Bleach” anime series.

The title, which returns after an eight-year hiatus, is available to stream on Shahid at the same time as in Japan and the US.

The group also announced the extension of its partnership with TOEI Animation, the Japanese studio behind the anime adaptation of the 25-year global hit manga series “One Piece.” As part of the renewed collaboration, MBC Group will air the upcoming episodes of the series exclusively on its platform.

Following the success of the anime adaptation of “Rascal Does Not Dream of Bunny Girl Senpai” on Shahid, the media group has also expanded its partnership with its production company, Aniplex.

Under the new collaboration, fans will enjoy more than 200 hours of Aniplex content on Shahid, including “Fate/Stay Night,” “Sword Art Online,” and “Gurren Lagann.”

The move reinforces MBC Group’s commitment to expanding its anime offering, continuing to add to an already rich catalog that includes renowned titles such as “Hunter x Hunter,” “Legend of the Galactic Heroes” and “Belle,” as well as the Japanese–Saudi Arabian animated action fantasy film “The Journey.”

The company said the new titles will be available to stream on Shahid by the end of the year.

The news comes at an exciting moment for MBC Group. The company was reported last month to be working with HSBC Holdings and JPMorgan Chase & Co. to go public as early as next year.


Social app Parler says sale to Kanye West called off

Updated 02 December 2022

  • Owners said the decision was made “in the interest of both parties in mid-November.”

NEW YORK: Social network Parler announced Thursday that its planned sale to Kanye West has been called off, as the rapper-businessman now known as Ye continues to alienate fans and commercial partners with anti-Semitic comments.
“Parlement Technologies would like to confirm that the company has mutually agreed with Ye to terminate the intent of sale of Parler,” the network — seen as a home for online extremist rhetoric — said in a tweet.
It said the decision was made “in the interest of both parties in mid-November.”
Parler had announced a deal for West to buy the platform popular with conservatives in mid-October — just over a week after the rapper’s Twitter and Instagram accounts were restricted over anti-Semitic posts he made.
But the rapper, who has spoken openly about his struggles with mental illness, has seen his business relationships crumble in recent weeks as his erratic behavior and extreme speech continue to raise concerns.
In perhaps his most provocative outburst to date, West on Thursday declared his “love” of Nazis and admiration for Adolf Hitler during a rambling livestream with conspiracy theorist Alex Jones.
The restrictions placed on the 45-year-old’s Twitter and Instagram accounts last month were not the first time his posts had prompted punitive action from major social media platforms.
Earlier this year, West was banned from posting on Instagram for 24 hours after violating the social network’s harassment policy amid his acrimonious divorce from reality star Kim Kardashian.
Launched in 2018, Parler became a haven for Donald Trump supporters and far-right users who say they have been censored on mainstream social media platforms. It has since signed up many more traditional Republican voices.
Parler was temporarily removed from Apple and Google app stores last year for failing to moderate calls for violence after the attack on the US Capitol by supporters of the former president.
It has since been allowed back in both stores, ostensibly after improving its content moderation systems.