Facebook bug unblocks unwanted connections for a bit

In this March 29, 2018, file photo, the logo for Facebook appears on screens at the Nasdaq MarketSite in New York's Times Square. (AP)
Updated 03 July 2018

  • While someone who was unblocked could not see content shared with friends, they could have seen things posted to a wider audience
  • People affected by the bug will get notifications encouraging them to check their blocked lists

SAN FRANCISCO: Facebook on Monday said it is notifying more than 800,000 users that a software bug temporarily unblocked people they had blocked on the social network and its Messenger service.
The glitch, active between May 29 and June 5, has been fixed, according to Facebook, which has been striving to regain trust in the aftermath of the Cambridge Analytica data privacy scandal.
“We know that the ability to block someone is important,” Facebook chief privacy officer Erin Egan said in a blog post.
“We’d like to apologize and explain what happened.”
Blocking someone on Facebook prevents them from seeing posts on the blocker’s profile, connecting as a friend, or starting Messenger conversations.
Blocking someone also automatically “unfriends” the person.
“There are many reasons why people block another person on Facebook,” Egan said.
“Their relationship may have changed or they may want to take a break from someone posting content they find annoying.”
People also block others for harsher reasons, such as harassment or bullying, Egan added.
The software bug did not restore any severed friend connections at the social network, but someone who had been blocked may have been able to reach out on Messenger to the person who blocked them, according to Facebook.
“While someone who was unblocked could not see content shared with friends, they could have seen things posted to a wider audience,” Egan said of the glitch.
For the vast majority of the more than 800,000 people affected by the bug, only one person they had blocked was temporarily unblocked, according to Facebook.
People affected by the bug will get notifications encouraging them to check their blocked lists.
Facebook chief Mark Zuckerberg earlier this year was grilled by the European Parliament and the US Congress about a massive breach of users’ personal data in the Cambridge Analytica scandal.
Facebook admitted that up to 87 million users may have had their data hijacked by British consultancy Cambridge Analytica, which worked for US President Donald Trump during his 2016 campaign.


YouTube, under pressure for problem content, takes down 58 mln videos in quarter

Updated 14 December 2018

  • Google added thousands of moderators this year, expanding to more than 10,000, in hopes of reviewing user reports faster

WASHINGTON: YouTube took down more than 58 million videos and 224 million comments during the third quarter based on violations of its policies, the unit of Alphabet Inc’s Google said on Thursday in an effort to demonstrate progress in suppressing problem content.
Government officials and interest groups in the United States, Europe and Asia have been pressuring YouTube, Facebook Inc. and other social media services to quickly identify and remove extremist and hateful content that critics say incites violence.
The European Union has proposed that online services should face steep fines unless they remove extremist material within one hour of a government order to do so.
An official at India’s Ministry of Home Affairs, speaking on condition of anonymity, said on Thursday that social media firms had agreed to tackle authorities’ requests to remove objectionable content within 36 hours.
This year, YouTube began issuing quarterly reports about its enforcement efforts.
As with past quarters, most of the removed content was spam, YouTube said.
Automated detection tools help YouTube quickly identify spam, extremist content and nudity. During September, 90 percent of the nearly 10,400 videos removed for violent extremism, and of the 279,600 videos removed for child safety issues, received fewer than 10 views, according to YouTube.
But YouTube faces a bigger challenge with material promoting hateful rhetoric and dangerous behavior.
Automated detection technologies for those policies are relatively new and less efficient, so YouTube relies on users to report potentially problematic videos or comments. This means that the content may be viewed widely before being removed.
Google added thousands of moderators this year, expanding to more than 10,000, in hopes of reviewing user reports faster. YouTube declined to comment on growth plans for 2019.
It has described pre-screening every video as unfeasible.
The third-quarter removal data for the first time revealed the number of YouTube accounts Google disabled for either having three policy violations in 90 days or committing what the company found to be an egregious violation, such as uploading child pornography.
YouTube removed about 1.67 million channels and all of the 50.2 million videos that were available from them.
Nearly 80 percent of the channel takedowns related to spam uploads, YouTube said. About 13 percent concerned nudity, and 4.5 percent child safety.
YouTube said users post billions of comments each quarter, making the 224 million removed a small fraction. It declined to disclose the overall number of accounts that upload videos, but said channel removals were also a small fraction.
In addition, about 7.8 million videos were removed individually for policy violations, in line with the previous quarter.