Facebook dithered in curbing divisive user content in India

Facebook saw India as one of the most ‘at risk countries’ in the world and identified both Hindi and Bengali languages as priorities for ‘automation on violating hostile speech.’ (AFP)
Updated 24 October 2021

  • Communal and religious tensions in India have a history of boiling over on social media and stoking violence
  • Facebook has become increasingly important in politics, and India is no different

NEW DELHI: Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as the Internet giant’s own employees cast doubt over its motivations and interests.
Ranging from research produced as recently as March of this year to company memos dating back to 2019, the internal documents on India highlight Facebook’s constant struggle to quash abusive content on its platforms in the world’s biggest democracy and the company’s largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.
The files show that Facebook has been aware of the problems for years, raising questions over whether it has done enough to address the issues. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party are involved.
Across the world, Facebook has become increasingly important in politics, and India is no different.
Modi has been credited with leveraging the platform to his party’s advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Modi and Facebook chairman and CEO Mark Zuckerberg have exuded bonhomie, memorialized by a 2015 image of the two hugging at the Facebook headquarters.
The leaked documents include a trove of internal company reports on hate speech and misinformation in India that in some cases appeared to have been intensified by its own “recommended” feature and algorithms. They also include the company staffers’ concerns over the mishandling of these issues and their discontent over the viral “malcontent” on the platform.
According to the documents, Facebook saw India as one of the most “at risk countries” in the world and identified both Hindi and Bengali languages as priorities for “automation on violating hostile speech.” Yet, Facebook didn’t have enough local language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.
In a statement to the AP, Facebook said it has “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali” which “reduced the amount of hate speech that people see by half” in 2021.
“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.
This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions were obtained by a consortium of news organizations, including the AP.
In February 2019, ahead of a general election and with concerns about misinformation running high, a Facebook employee wanted to understand what a new user in India would see on their news feed if all they did was follow pages and groups solely recommended by the platform itself.
The employee created a test user account and kept it live for three weeks, a period during which an extraordinary event shook India — a militant attack in disputed Kashmir had killed over 40 Indian soldiers, bringing the country close to war with rival Pakistan.
In the note, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages,” the employee whose name is redacted said they were “shocked” by the content flooding the news feed. The person described the content as having “become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore.”
Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech, unverified rumors and viral content ran rampant.
The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of the content was extremely graphic.
One included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag partially covering it. Its “Popular Across Facebook” feature showed a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombings, including an image of a napalm bomb from a video game clip debunked by one of Facebook’s fact-check partners.
“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.
The report sparked deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.
“Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?” the researcher asked in their conclusion.
The memo, circulated with other employees, did not answer that question. But it did expose how the platform’s own algorithms or default settings played a part in producing such objectionable content. The employee noted that there were clear “blind spots,” particularly in “local language content.” They said they hoped these findings would start conversations on how to avoid such “integrity harms,” especially for those who “differ significantly” from the typical US user.
Even though the research was conducted during three weeks that were not an average representation, the employee acknowledged that it did show how such “unmoderated” and problematic content “could totally take over” during “a major crisis event.”
The Facebook spokesperson said the test study “inspired deeper, more rigorous analysis” of its recommendation systems and “contributed to product changes to improve them.”
“Separately, our work on curbing hate speech continues and we have further strengthened our hate classifiers, to include four Indian languages,” the spokesperson said.


Facebook whistleblower says transparency needed to fix social media ills

Haugen said the company should be required to disclose which languages are supported by its tech safety systems. (AFP)
Updated 04 December 2021

  • Facebook whistleblower says the degree to which Facebook is harmful in languages other than English will leave people “even more shocked”

LONDON: A deeper investigation into Facebook’s lack of controls to prevent misinformation and abuse in languages other than English is likely to leave people “even more shocked” about the potential harms caused by the social media firm, whistleblower Frances Haugen told Reuters.
Haugen, a former product manager at Meta Platforms Inc’s Facebook, spoke at the Reuters Next conference on Friday.
She left the company in May with thousands of internal documents which she leaked to the Wall Street Journal. That led to a series of articles in September detailing how the company knew its apps helped spread divisive content and harmed the mental health of some young users.
Facebook also knew it had too few workers with the necessary language skills to identify objectionable posts from users in a number of developing countries, according to the internal documents and Reuters interviews with former employees.
People who use the platform in languages other than English are using a “raw, dangerous version of Facebook,” Haugen said.
Facebook has consistently said it disagrees with Haugen’s characterization of the internal research and that it is proud of the work it has done to stop abuse on the platform.
Haugen said the company should be required to disclose which languages are supported by its tech safety systems; otherwise, “Facebook will do ... the bare minimum to minimize PR risk,” she said.
The internal Facebook documents made public by Haugen have also raised fresh concerns about how it may have failed to take actions to prevent the spread of misleading information.
Haugen said the social media company knew it could introduce “strategic friction” to make users slow down before resharing posts, such as requiring users to click a link before they were able to share the content. But she said the company avoided taking such actions in order to preserve profit.
Such measures to prompt users to reconsider sharing certain content could be helpful given that allowing tech platforms or governments to determine what information is true poses many risks, according to Internet and legal experts who spoke during a separate panel at the Reuters Next conference on Friday.
“In regulating speech, you’re handing states the power to manipulate speech for their own purposes,” said David Greene, civil liberties director at the Electronic Frontier Foundation.
The documents made public by Haugen have led to a series of US congressional hearings. Adam Mosseri, head of Meta Platforms’ Instagram app, will testify next week on the app’s effect on young people.
Asked what she would say to Mosseri given the opportunity, Haugen said she would question why the company has not released more of its internal research.
“We have evidence now that Facebook has known for years that it was harming kids,” she said. “How are we supposed to trust you going forward?”


Twitter admits policy ‘errors’ after far-right abuse

Twitter launched new rules Tuesday blocking users from sharing private images of other people without their consent. (File/AFP)
Updated 04 December 2021

  • Twitter admitted errors in enforcing its new rules, which say anyone can ask Twitter to take down images of themselves posted without their consent
  • This comes after a screenshot of a far-right call-to-action circulated on Telegram claiming things now “work more in our favor.”

WASHINGTON: Twitter’s new picture permission policy was aimed at combating online abuse, but US activists and researchers said Friday that far-right backers have employed it to protect themselves from scrutiny and to harass opponents.
Even the social network admitted the rollout of the rules, which say anyone can ask Twitter to take down images of themselves posted without their consent, was marred by malicious reports and its teams’ own errors.
It was just the kind of trouble anti-racism advocates worried was coming after the policy was announced this week.
Their concerns were quickly validated, with anti-extremism researcher Kristofer Goldsmith tweeting a screenshot of a far-right call-to-action circulating on Telegram: “Due to the new privacy policy at Twitter, things now unexpectedly work more in our favor.”
“Anyone with a Twitter account should be reporting doxxing posts from the following accounts,” the message said, with a list of dozens of Twitter handles.
Gwen Snyder, an organizer and researcher in Philadelphia, said her account was blocked this week after a report to Twitter about a series of 2019 photos she said showed a local political candidate at a march organized by extreme-right group Proud Boys.
Rather than go through an appeal with Twitter she opted to delete the images and alert others to what was happening.
“Twitter moving to eliminate (my) work from their platform is incredibly dangerous and is going to enable and embolden fascists,” she told AFP.
In announcing the privacy policy on Tuesday, Twitter noted that “sharing personal media, such as images or videos, can potentially violate a person’s privacy, and may lead to emotional or physical harm.”
But the rules don’t apply to “public figures or individuals when media and accompanying Tweets are shared in the public interest or add value to public discourse.”
By Friday, Twitter noted the rollout had been rough: “We became aware of a significant amount of coordinated and malicious reports, and unfortunately, our enforcement teams made several errors.”
“We’ve corrected those errors and are undergoing an internal review to make certain that this policy is used as intended,” the firm added.


However, Los Angeles-based activist and researcher Chad Loder said their account was permanently blocked after reports to Twitter over publicly recorded images from an anti-vaccine rally and a confrontation outside the home of a former Vice journalist.
“Twitter is saying I must delete my tweets featuring photographs of people at newsworthy public events that did indeed get news coverage, or I will never get my account back,” Loder told AFP, adding it was the third report of their account to Twitter in 48 hours.
“The current mass-reporting actions by the far-right are just the latest salvo in an ongoing, concerted effort to memory-hole evidence of their crimes and misdeeds,” Loder added, using a term popularized by George Orwell’s dystopian novel 1984.
Experts noted that Twitter’s new rules sound like a well-intentioned idea but are incredibly thorny to enforce.
One reason is that the platform has become a key forum for identifying people involved in far-right and hate groups, with Internet sleuths posting their names or other identifying information.
The practice of so-called “doxxing” has cost the targets their jobs, set them up for intense public ridicule and even criminal prosecution, while the activists who post the information have faced threats or harassment themselves.
A major example was the online effort to track down people involved in the violence at the US Capitol, which was stormed in January by Donald Trump supporters seeking to block the certification of President Joe Biden’s victory.
Even the US Federal Bureau of Investigation regularly posts images on its feed of as-yet unnamed people it is seeking in connection with the violence.
“Twitter has given extremists a new weapon to bring harm to those in the greatest need of protection and those shining a light on danger,” said Michael Breen, president and CEO of advocacy group Human Rights First, which called on Twitter to halt the policy.
The new rules, announced just a day after Parag Agrawal took over from co-founder Jack Dorsey as boss, wade into issues that may be beyond the platform’s control.
“It gets complicated fast, but these are issues that are going to be resolved probably in our courts,” said Betsy Page Sigman, a professor emeritus at Georgetown University. “I’m not optimistic about Twitter’s changes.”


Twitter’s design, engineering heads to step down in management rejig

The moves come just days after co-founder Jack Dorsey stepped down as chief executive officer. (File/AFP)
Updated 04 December 2021

  • Twitter's design and engineering heads will step down from their roles as part of a management restructuring

LONDON: Twitter Inc. said on Friday its engineering head Michael Montano and design chief Dantley Davis would step down from their roles by the end of this month, as part of a broader management restructuring at the social networking site.
The moves come just days after co-founder Jack Dorsey stepped down as chief executive officer and handed over the reins to Chief Technology Officer Parag Agrawal.
Twitter said Agrawal, in his newly assumed role, has decided to reorganize the company’s leadership structure and shift to a general manager model, with general managers for consumer, revenue and core tech overseeing all core teams across engineering, product management, design and research.
Product lead Kayvon Beykpour, revenue product lead Bruce Falck and Vice President of Engineering Nick Caldwell will now lead the three units respectively, the company said.
Twitter added that Lindsey Iannucci, a senior operations and strategy executive at the company, would be chief of staff.


BuzzFeed to go public after raising less money than expected

Updated 04 December 2021

  • The digital company also raised $150 million in debt financing as part of the deal

American digital company BuzzFeed, known for its viral content and journalism, will go public on Dec. 6 after it initially raised less money than expected.

In a press release on Friday, BuzzFeed said that it had finalized a merger with 890 5th Avenue Partners, a special purpose acquisition company (SPAC), which aims to raise funds through an initial public offering to acquire an existing company.

BuzzFeed’s shares are expected to start trading on the Nasdaq on Monday under the ticker symbol “BZFD.”

BuzzFeed aimed to be valued on Wall Street at $1.5 billion but it raised just $16 million from the SPAC deal, which was announced in June.

The company initially said that 890 5th Avenue Partners held about $288 million in cash, but the majority of investors ultimately withdrew.

The digital company also raised $150 million in debt financing as part of the deal.

BuzzFeed, created in 2006, first became known for its lists and topical quizzes, before broadening its offerings with a Pulitzer Prize-winning news division, YouTube channel and podcasts.

In November 2020, the platform, headquartered in New York, bought the Huffington Post news site from Verizon without disclosing the amount.

BuzzFeed’s public listing comes just days after employees at the news arm staged a 24-hour walkout protesting the company’s failure to offer certain contract conditions, including a base salary of $50,000, after nearly two years of negotiations.

“BuzzFeed won’t budge on critical issues like wages — all while preparing to go public and make executives even richer,” the union said on Twitter.

As part of the SPAC deal, BuzzFeed has also acquired Complex Networks, a media company jointly run by Verizon and Hearst.


Young Arabs’ heavy reliance on social media seen as a double-edged sword 

Updated 04 December 2021

  • From social change to extremism, the Arab obsession with social media is regarded as a Catch-22
  • Arab experts weigh the pros and cons of young people’s massive dependence on social media

DUBAI: Social media is no longer a mere secondary method of communication. In recent years, it has become a powerful tool that can shape public opinion and educate and influence young people, as demonstrated over the last decade by the impact of social networks on major political and social events in the Middle East.

In the early years of the Arab Spring, even before Instagram was as widespread as it is today, activists resorted to Facebook and Twitter to amplify their demands.

During the Beirut blast of Aug. 4, 2020, Lebanese at home and abroad turned to social media to show the world the aftermath of the destruction, cry out for help, and mobilize their communities to assist those in need.

One could argue that the violence that took place in Palestine, the Gaza Strip and Israel in May gained more visibility internationally due to social media. The pleas were heard, the violence was seen and even experienced vicariously thanks to widespread sharing on social networks.

During such events, critical and verified information was shared just as widely as misinformation and falsified data: the double-edged sword of social networks.

Global dependence on social media has continued to rise in recent years, particularly during the coronavirus disease pandemic. According to Hootsuite’s July 2020 report on global digital growth, digital adoption has increased by 10 percent since COVID-19 compared with 12 months earlier. Almost 51 percent of the global population currently uses social media, with roughly 1 million new users per day, according to the report’s author, Simon Kemp.

As for the Arab world, the 2021 Arab Barometer report on the region’s digital divide confirmed an increase in internet usage in all countries of the Middle East and North Africa during the pandemic. In “The Arab World’s Digital Divide,” Daniella Raz argues this growth has fostered “a digital divide that is affected by the economic status of the country and education level of its citizens.”

According to the Arab Youth Survey 2021, 61 percent of Arab youth use social media as a news source, compared with 34 percent who consume news online and 9 percent through newspapers — making social media the number one source of news for young people.

The MENA region’s youth population is increasingly dependent on social media platforms to access information, particularly video and visually driven social networks, says Fares Akkad, director of media partnerships for news in growth markets across Asia Pacific, Latin America, the Middle East, Africa and Turkey at Meta.

A man wearing a facemask as a preventative measure against the COVID-19 virus rides a bicycle in front of a mural. (AFP/File Photo)

“This is a trend that has raised its bar over time and has been boosted especially during the pandemic, and it is likely to grow at a larger and faster pace,” he tells Arab News.

“We have seen the strength and scale of the digital world, giving a platform and voice to millions who may otherwise not have it, providing an open and accessible venue through which regular people can connect and access a plethora of information, from politics to lifestyle and fashion.”

During COVID-19 there was a noticeable shift in how the Arab public retrieves information, from traditional media to new media, particularly social media. This led many Arab governments to redefine how they use networking platforms as ways to communicate critical information with their populations.

The World Health Organization also launched its official pages on social media platforms, including WhatsApp — an action that acknowledged how, during the pandemic, social media became a primary source through which official information and data was disseminated.

However, the same Arab Youth Survey, conducted in 2019, showed that 80 percent of Arab youth used social media as a source of information, compared with online news (61 percent) and newspapers (27 percent).

The drop in the use of social media as a news source, from 80 percent in 2019 and 79 percent in 2020 to 61 percent in 2021, highlights growing hesitation about relying on these platforms for information.

“From most of the surveys I have done it is shown clearly that much of the younger generation today is relying on social media for news,” Jad Melki, associate professor and journalism and media studies director at the Institute of Media Research and Training at the Lebanese American University, told Arab News.

“A lot of the youth don’t follow news to start with — they are more interested in entertainment than following news.”

Reluctance to use the platforms stems from their negative attributes: just as critical information is shared with the public for the greater good, so too are false rumors and misinformation, which have contributed to a rise in fear and panic. This is particularly true among the youth, many of whom do not yet have the experience to fact-check information or turn to other sources.

A case in point is Facebook whistle-blower Frances Haugen’s testimony before the US Congress in October, in which she stated that Facebook’s products “harm children, stoke division and weaken our democracy.” She claimed the company should declare “moral bankruptcy” if it is to move forward.

Haugen also accused the company of sowing divisions and fueling ethnic violence, placing — as she said in Washington — “astronomical profits before people.”

A woman looks at the Instagram page of Saudi influencer Ragda Bakhorji, in Dubai on April 7, 2020. (AFP/File Photo)

Haugen came forward as the source of a series of revelations in the Wall Street Journal based on internal Facebook (now Meta) documents that revealed the company knew how harmful Instagram was to teenagers’ mental health, and how changes to Facebook’s News Feed feature had also made the platform more divisive among young people.

Haugen’s testimony suggests social media is no longer a secondary method of communication, but a powerful tool that influences public opinion, and there are positives and negatives in its use.

It can educate just as much as it can misinform, and bring people and cultures together as well as fuel terrorism and extremism. In many cases, social media is also overtaking mainstream media outlets as the preferred way to obtain information.

Akkad affirms that Meta’s family of apps has prioritized making sure “everyone can access credible and accurate information.” He says Meta removes false claims about vaccines, conspiracy theories, and misinformation that could lead to physical harm.

Akkad says Meta removes content that violates its community standards, and to date has taken down more than 20 million pieces of false COVID-19 and vaccine content.

The platform has built a global network of more than 80 independent fact-checking partners, who rate the accuracy of posts in more than 60 languages across its apps; its partners in the Arab region include AFP, Reuters and Fatabyyano.

It has also displayed warnings on more than 190 million pieces of COVID-related content on Facebook that Meta’s fact-checking partners rated as false, partly false, altered, or missing context. 

Jordanian make-up artist Alaa Bliha, 27, speaks to a journalist in the basement apartment where she lives with her mother and young brother in the capital Amman, on February 2, 2021. (AFP/File Photo)

On the positive side, Akkad says, Meta has helped over 2 billion people find credible COVID-19 information through its COVID-19 Information Center and News Feed pop-ups.

Yet is this enough to diminish the spread of false information?

Arpi Berberian, a social media manager at Create Media Group in Dubai, believes that to protect Arab youth, or any users for that matter, social media must be regulated.

While social media is the primary source through which young people receive and process news, “it should also be up to the receiver to fact check and source check what they read online. Especially when it comes to political news,” she told Arab News.

“It is hard to generalize across Arab countries given the different political systems, educational levels and cultures,” said Melki.

“Lebanon, Syria, Palestine, Jordan and Iraq or what we call Western Asia, has been the most in turmoil outside Yemen and Libya, and part of that turmoil is related to social media habits and obtaining information.”

Melki says that as the youth get older and the generation shifts, they become more and more interested in politics and following the news. Moreover, he points out, traditional news now circulates largely online and through social media.

“However, a significant majority still watches television — TV remains king across all demographics, particularly when there is a conflict,” he says.

“We did a survey during the Lebanese protests in 2019 and television was the number one way to receive news followed by social media.”

A picture taken on February 4, 2013 in Riyadh shows a Saudi woman using a tablet computer. (AFP/File Photo)

Melki added that the survey found the same among Syrian refugees, whether inside or outside camps: television is the number one way to receive the news.

Can social media dependency in the Arab world be reversed and does it need to be?

“I don’t think it can be reversed. It can be improved though,” says Berberian. “There needs to be guidelines imposed by governments on social media outlets, especially on major outlets that have millions of users of all ages.

“It also doesn’t seem to be a good idea to allow some of the major social media platforms to be run by one entity without any balance. Accountability and the safety of users need to be at the forefront for social media outlets.”

If social media dependency cannot be reduced in the Arab world, and it has become, as analysts state, one of the primary ways, if not the primary way, for the youth and the general populace to receive critical information, then the way forward is regulation and education. But who is to regulate and educate, and on what terms?

Especially in nations that lack the opportunities for young people available elsewhere, social media becomes a window to the world, one with endless social and business possibilities. This is the double-edged sword of social media: its pros and cons can almost be equally weighed.