Facebook trains artificial intelligence to spot suicidal signs

In this April 18, 2017, file photo, conference workers speak in front of a demo booth at Facebook's annual F8 developer conference in San Jose, Calif. (AP)
Updated 28 November 2017

SAN FRANCISCO: Facebook said Monday it is stepping up its use of artificial intelligence to identify members of the leading social network who may be thinking of suicide.
Software will look for clues in posts or even in videos streamed on Facebook Live, then fire off reports to human reviewers and speed alerts to responders trained to help, according to the social network.
“This approach uses pattern recognition technology to help identify posts and live streams as likely to be expressing thoughts of suicide,” Facebook vice president of product management Guy Rosen said in a blog post.
Signs watched for are said to include the text of people's posts and comments on them, such as a friend asking whether they are all right.
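Facebook has not published the details of its system, but the "pattern recognition" it describes is, in broad strokes, a text classifier: a model trained on examples of worrying and unremarkable posts that scores new text and routes likely matches to human reviewers. The Python sketch below shows that general shape only; the training phrases, the flag_for_review helper and the 0.5 threshold are invented for illustration and are not Facebook's.

# A minimal, hypothetical sketch of the kind of pattern recognition described
# above. All training examples and the threshold are invented; a real system
# would be trained on vastly more data and reviewed by trained specialists.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = post or comment that may warrant human review,
# 0 = unremarkable.
texts = [
    "are you ok? I'm worried about you",
    "can I help? please talk to someone",
    "I don't want to go on anymore",
    "great game last night, see you at practice",
    "happy birthday! hope it's a good one",
    "just posted my vacation photos",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression: a simple text-pattern recognizer.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

def flag_for_review(post: str, threshold: float = 0.5) -> bool:
    """Return True if the post should be routed to a human reviewer."""
    return model.predict_proba([post])[0][1] >= threshold

print(flag_for_review("is everything alright? I'm here if you need me"))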
Facebook already has tools in place for people to report concerns about friends who may be considering self-harm, but the software can speed the process and even detect signs people may overlook.
“There have been terribly tragic events — like suicides, some live-streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner,” Facebook chief executive Mark Zuckerberg said early this year in a post at the social network focused on building global community.
“Artificial intelligence can help provide a better approach.”
Facebook is rolling out the artificial intelligence tool outside the US and plans to eventually make it available everywhere except the European Union, where data usage is restricted by privacy regulations.
Facebook has been collaborating with mental health organizations for about a decade on ways to spot signs users may be suicidal and get them help.


YouTube bans seven Houthi channels 

Updated 24 January 2021

LONDON: YouTube permanently deleted seven Houthi accounts on Sunday for violating its policies, less than a week after the US designated the militia as a foreign terrorist organization.

It deleted accounts that the group had been using to share its agenda, including its main channel, "Ferqat Ansar Allah," and "Al Ealam Al-Harbe," which translates as "the war media."

The terrorist-designated organization used the channels and other social media platforms to stream propaganda and encourage violence.

Many leaders and members of the Houthi movement remain active on social media, such as senior figure Muhammad Ali Al-Houthi, and continue to incite hatred and violence.

The US designation came into effect last Tuesday, the day before President Donald Trump left office. The Houthis are accused of waging a deadly campaign that has destabilized Yemen and the Middle East.

“The designations are intended to hold Ansar Allah accountable for its terrorist acts, including cross-border attacks threatening civilian populations, infrastructure and commercial shipping,” US Secretary of State Mike Pompeo said earlier this month, using the official name of the Houthi movement.

He added that the designations would not affect the work of relief agencies.