Indian-American brothers look to harness artificial intelligence for greater good

Artificial intelligence could be deployed by dictators, criminals and terrorists to manipulate elections and use drones in terrorist attacks, more than two dozen experts said on Wednesday, 21 February 2018. (AFP)

SAN FRANCISCO: As debate swirls on whether artificial intelligence will be a boon or a curse for humanity, two Indian-American entrepreneur brothers are out to ensure the emerging technologies don’t just benefit the richest in society.
Romesh and Sunil Wadhwani this week launched what is billed as the world’s first nonprofit institute dedicated to putting AI to work to improve the lives of poor farmers, rural health care workers and teachers in communities with scant resources.
“AI will go where AI will go; it is difficult to predict where,” Sunil Wadhwani said of the conflicting views on the emergence of computers more brilliant than their human creators.
“Our focus is how many tens of millions of lives can we improve in the next five or 10 years. Where AI goes in 100 years, it will go.”
The entrepreneur brothers, who have a series of lucrative startups to their name, have committed $30 million over 10 years to the Wadhwani AI institute, established in Mumbai with the Indian government as a partner.
Areas targeted at the outset will include health care, education, agriculture and urban infrastructure.
The project’s founders hope AI could help nurses in rural areas with diagnoses, advise on how to optimize crops, translate textbooks into various languages as needed, or even spot signs that students might be on a path to dropping out.
“AI is a game-changing technology,” said Sunil Wadhwani, who is based in Pittsburgh, where he is a trustee of Carnegie Mellon University.
“A lot of developing countries are getting left behind; US and China are leapfrogging ahead.”
Students from New York University and the University of Southern California will travel to Mumbai to collaborate, while the brothers also plan to partner with players in Silicon Valley, where Romesh Wadhwani is based.
The ethical issues raised by AI — from its potential to destroy jobs to the power it could exert over people’s lives — will be front of mind, according to institute chief P. Anandan, a former Microsoft Research director.
“It has the potential to be used badly, or run away on its own,” Anandan said of AI.
“At the end of the day, you are going to manage that by being aware of it from the start and applying it where intentions are good.”

Internet giants have been investing heavily in creating software to help machines think more like people, boosted by super-fast computer processing power and access to mountains of data to analyze.
AI has been put to work powering virtual assistants, recognizing people’s friends in photos, fighting “fake news,” stymieing the online spread of violent extremist messages and more.
But the rise of artificial intelligence brings mighty new challenges too, and the new initiative coincides with the release of a report by AI scholars warning the technology has the potential to be exploited for nefarious purposes.
“These technologies have many widely beneficial applications,” said the study produced by the Future of Humanity Institute, the nonprofit group OpenAI and others.
“Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously.”
The Electronic Frontier Foundation, which took part in the study, expressed concern that “increasingly sophisticated AI will usher in a world that is strange and different from the one we’re used to, and there are serious risks if this technology is used for the wrong ends.”
High-profile figures who have expressed fears about the potential dangers of AI include tech visionary and innovator Elon Musk.
Musk, the SpaceX founder and Tesla chief executive, took part in 2015 in creating the research organization OpenAI, which aims to develop artificial intelligence that helps rather than hurts people.
Microsoft, Amazon, Apple, Google, Facebook, IBM, and Google-owned British AI firm DeepMind are also members of a nonprofit “Partnership on AI” which seeks to promote the technology’s use “to benefit people and society.”
Sunil Wadhwani has meanwhile promised an “aggressive” timeline at the brothers’ eponymous institute, with testing of potential AI tools starting by the end of this year.


YouTube, under pressure for problem content, takes down 58 mln videos in quarter



WASHINGTON: YouTube took down more than 58 million videos and 224 million comments during the third quarter for violations of its policies, the unit of Alphabet Inc.’s Google said on Thursday, in an effort to demonstrate progress in suppressing problem content.
Government officials and interest groups in the United States, Europe and Asia have been pressuring YouTube, Facebook Inc. and other social media services to quickly identify and remove extremist and hateful content that critics say incites violence.
The European Union has proposed online services should face steep fines unless they remove extremist material within one hour of a government order to do so.
An official at India’s Ministry of Home Affairs, speaking on condition of anonymity, said on Thursday that social media firms had agreed to tackle authorities’ requests to remove objectionable content within 36 hours.
This year, YouTube began issuing quarterly reports about its enforcement efforts.
As with past quarters, most of the removed content was spam, YouTube said.
Automated detection tools help YouTube quickly identify spam, extremist content and nudity. During September, 90 percent of the nearly 10,400 videos removed for violent extremism, and of the 279,600 videos removed over child safety issues, received fewer than 10 views, according to YouTube.
But YouTube faces a bigger challenge with material promoting hateful rhetoric and dangerous behavior.
Automated detection technologies for those policies are relatively new and less efficient, so YouTube relies on users to report potentially problematic videos or comments. This means that the content may be viewed widely before being removed.
Google added thousands of moderators this year, expanding to more than 10,000, in hopes of reviewing user reports faster. YouTube declined to comment on growth plans for 2019.
It has described pre-screening every video as unfeasible.
The third-quarter removal data for the first time revealed the number of YouTube accounts Google disabled for either having three policy violations in 90 days or committing what the company found to be an egregious violation, such as uploading child pornography.
YouTube removed about 1.67 million channels, along with all 50.2 million videos that had been available from them.
Nearly 80 percent of the channel takedowns related to spam uploads, YouTube said. About 13 percent concerned nudity, and 4.5 percent child safety.
YouTube said users post billions of comments each quarter. It declined to disclose the overall number of accounts that have uploaded videos, but said the removed channels were, like the removed comments, a small fraction of the total.
In addition, about 7.8 million videos were removed individually for policy violations, in line with the previous quarter.