Facebook, Twitter, YouTube pressed over terror content

Last year Google, Facebook, Twitter and Microsoft banded together to share information on groups and posts related to violent extremism. (AFP)
Updated 17 January 2018

WASHINGTON: Facebook, Twitter, and YouTube were pressed in the US Congress on Wednesday over their reliance on artificial intelligence and algorithms to keep their powerful platforms clear of violent extremist posts.
In a Senate Commerce Committee hearing, executives of the world’s top social media companies were praised for their efforts so far to eliminate Daesh, Al-Qaeda and other terrorist content from the Internet.
But critics say that extremist groups continue to get their propaganda out to followers via those platforms, and call for tougher action.
Another concern is that the continued ability to use anonymous accounts, while benefiting pro-democracy activists battling repressive governments, will also continue to empower extremists.
“These platforms have created a new and stunningly effective way for nefarious actors to attack and to harm,” said Senator Bill Nelson.
The current efforts by the companies to remove content and cooperate with each other in doing so are strong but “not enough,” he said.
YouTube uses algorithms to automatically remove 98 percent of videos promoting violent extremism, said Public Policy Director Juniper Downs.
But Senator John Thune, chairman of the Commerce Committee, asked Downs why a video that showed the man who bombed Manchester Arena in June 2017 how to build his bomb has repeatedly been re-uploaded to the site each time YouTube deletes it, as recently as this month.
“We are catching re-uploads of this video quickly and removing it as soon as those uploads are detected,” said Downs.
Carlos Monje, director of Public Policy and Philanthropy for Twitter, said that even with all their efforts to fight terror-and-hate-related content, “It is a cat-and-mouse game and we are constantly evolving to face the challenge.”
“Social media companies continue to get beat in part because they rely too heavily on technologists and technical detection to catch bad actors,” said Clint Watts, an expert at the Foreign Policy Research Institute in the use of the Internet by terror groups.
“Artificial intelligence and machine learning will greatly assist in cleaning up nefarious activity, but will for the near future fail to detect that which hasn’t been seen before.”
Last year Google, Facebook, Twitter and Microsoft banded together to share information on groups and posts related to violent extremism, to help keep it off their sites.


Facebook says it was ‘too slow’ to fight hate speech in Myanmar

Updated 16 August 2018

YANGON: Facebook has been “too slow” to address hate speech in Myanmar and is acting to remedy the problem by hiring more Burmese speakers and investing in technology to identify problematic content, the company said in a statement on Thursday.
The acknowledgement came a day after a Reuters investigation showed why the company has failed to stem a wave of vitriolic posts about the minority Rohingya.
Some 700,000 Rohingya fled their homes last year after an army crackdown that the United States denounced as ethnic cleansing. The Rohingya now live in teeming refugee camps in Bangladesh.
“The ethnic violence in Myanmar is horrific and we have been too slow to prevent misinformation and hate speech on Facebook,” Facebook said.
The Reuters story revealed that the social media giant had for years dedicated scant resources to combating hate speech in Myanmar, a market it dominates and where there have been repeated eruptions of ethnic violence.
In early 2015, for instance, there were only two people at Facebook who could speak Burmese monitoring problematic posts.
In Thursday’s statement, posted online, Facebook said it was using tools to automatically detect hate speech and hiring more Burmese-language speakers to review posts, following up on a pledge made by founder Mark Zuckerberg to US senators in April.
The company said that it had over 60 “Myanmar language experts” in June and plans to have at least 100 by the end of the year.
Reuters found more than 1,000 examples of posts, comments, images and videos denigrating and attacking the Rohingya and other Muslims that were on the social media platform as of last week.
Some of the material, which included pornographic anti-Muslim images, has been up on Facebook for as long as six years.
There are numerous posts that call the Rohingya and other Muslims dogs and rapists, and urge that they be exterminated.
Facebook currently doesn’t have a single employee in Myanmar, relying instead on an outsourced, secretive operation in Kuala Lumpur – called Project Honey Badger – to monitor hate speech and other problematic posts, the Reuters investigation showed.
Because Facebook’s systems struggle to interpret Burmese script, the company is heavily dependent on users reporting hate speech in Myanmar.
Researchers and human rights activists say they have been warning Facebook for years about how its platform was being used to spread hatred against the Rohingya and other Muslims in Myanmar.
In its statement on Thursday, Facebook said it had banned a number of Myanmar hate figures and organizations from the platform.