Top experts warn against ‘malicious use’ of AI

In this file photo taken on February 15, 2018, US Defense Secretary James Mattis reacts as he delivers a speech during a press conference on the second day of the NATO defense ministers' meeting in Brussels. (AFP)
Updated 21 February 2018

PARIS: Artificial intelligence could be deployed by dictators, criminals and terrorists to manipulate elections and mount drone attacks, more than two dozen experts said Wednesday as they sounded the alarm over misuse of the technology.
In a 100-page analysis, they outlined rapid growth in cybercrime and the use of “bots” to interfere with news gathering and penetrate social media, among a host of plausible scenarios over the next five to 10 years.
“Our report focuses on ways in which people could do deliberate harm with AI,” said Sean O hEigeartaigh, Executive Director of the Cambridge Center for the Study of Existential Risk.
“AI may pose new threats, or change the nature of existing threats, across cyber-, physical, and political security,” he told AFP.
The common practice, for example, of “phishing” — sending emails seeded with malware or designed to finagle valuable personal data — could become far more dangerous, the report detailed.
Currently, attempts at phishing are either generic but transparent — such as scammers asking for bank details to deposit an unexpected windfall — or personalized but labor intensive — gleaning personal data to gain someone’s confidence, known as “spear phishing.”
“Using AI, it might become possible to do spear phishing at scale by automating a lot of the process” and making it harder to spot, O hEigeartaigh noted.
In the political sphere, unscrupulous or autocratic leaders can already use advanced technology to sift through mountains of data collected from omnipresent surveillance networks to spy on their own people.
“Dictators could more quickly identify people who might be planning to subvert a regime, locate them, and put them in prison before they act,” the report said.
Likewise, targeted propaganda along with cheap, highly believable fake videos have become powerful tools for manipulating public opinion “on previously unimaginable scales.”
An indictment handed down last week by US special counsel Robert Mueller detailed a vast operation to sow social division in the United States and influence the 2016 presidential election, in which so-called “troll farms” manipulated thousands of social media bots, especially on Facebook and Twitter.
Another danger zone on the horizon is the proliferation of drones and robots that could be repurposed to crash autonomous vehicles, deliver missiles, or threaten critical infrastructure to gain ransom.

“Personally, I am particularly worried about autonomous drones being used for terror and automated cyberattacks by both criminals and state groups,” said co-author Miles Brundage, a researcher at Oxford University’s Future of Humanity Institute.
The report details a plausible scenario in which an office-cleaning SweepBot fitted with a bomb infiltrates the German finance ministry by blending in with other machines of the same make.
The intruding robot behaves normally — sweeping, cleaning, clearing litter — until its hidden facial recognition software spots the minister and closes in.
“A hidden explosive device was triggered by proximity, killing the minister and wounding nearby staff,” according to the sci-fi storyline.
“This report has imagined what the world could look like in the next five to 10 years,” O hEigeartaigh said.
“We live in a world fraught with day-to-day hazards from the misuse of AI, and we need to take ownership of the problems.”
The authors called on policy makers and companies to make robot-operating software unhackable, to impose security restrictions on some research, and to consider expanding laws and regulations governing AI development.
Giant high-tech companies — leaders in AI — “have lots of incentives to make sure that AI is safe and beneficial,” the report said.
Another area of concern is the expanded use of automated lethal weapons.
Last year, more than 100 robotics and AI entrepreneurs — including Tesla and SpaceX CEO Elon Musk, and British astrophysicist Stephen Hawking — petitioned the United Nations to ban autonomous killer robots, warning that the digital-age weapons could be used by terrorists against civilians.
“Lethal autonomous weapons threaten to become the third revolution in warfare,” after the invention of machine guns and the atomic bomb, they warned in a joint statement, also signed by Google DeepMind co-founder Mustafa Suleyman.
“We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
Contributors to the new report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” also include experts from the Electronic Frontier Foundation, the Center for a New American Security, and OpenAI, a leading non-profit research company.
“Whether AI is, all things considered, helpful or harmful in the long run is largely a product of what humans choose to do, not the technology itself,” said Brundage.


Fall of top US scientists points to ethics gap in research

In this Dec. 6, 2016 file photo, Brian Wansink speaks during an interview in the produce section of a supermarket in Ithaca, N.Y. (AP)
Updated 24 September 2018


WASHINGTON: Three prominent US scientists have been pushed to resign over the past 10 days after damning revelations about their methods, a sign of greater vigilance and decreasing tolerance for misconduct within the research community.
The most spectacular fall concerned Jose Baselga, chief medical officer at Memorial Sloan Kettering Cancer Center in New York. He authored hundreds of articles on cancer research.
Investigative journalism group ProPublica and The New York Times revealed on September 8 that Baselga failed to disclose in dozens of research articles that he had received millions of dollars from pharmaceutical and medical companies.
Such declarations are generally required by scientific journals.
Links between a doctor leading a clinical trial and manufacturers of drugs or medical equipment used in the study can influence the methodology and ultimately the results.
But journals do not themselves verify that an author’s declarations are complete.
Caught up in the scandal, Baselga resigned on September 13.

Next came the case of Brian Wansink, director of the Food and Brand Lab at the prestigious Cornell University.
He made his name with studies that garnered plenty of media attention, including on pizza and on children’s appetites.
His troubles began last year when scientific sleuths discovered anomalies and surprisingly positive results in dozens of his articles.
In February, BuzzFeed published messages in which Wansink encouraged a researcher to extract from her data results more likely to go “viral.”
After a yearlong inquiry, Cornell announced on Thursday that Wansink had committed “academic misconduct in his research and scholarship,” describing a litany of problems with his results and methods.
He is set to resign at the end of the academic year but will no longer teach there in the meantime.
Wansink has denied any fraud, but 13 of his articles have already been retracted by journals.
In the final case, Gilbert Welch, a professor of public health at Dartmouth College, resigned last week.
The university accused him of plagiarism in an article published in The New England Journal of Medicine, the most respected American medical journal.

“The good news is that we are finally starting to see a lot of these cases become public,” said Ivan Oransky, co-founder of the site Retraction Watch, a project of the Center for Scientific Integrity that keeps tabs on retractions of research articles in thousands of journals.
Oransky told AFP that what has emerged so far is only the tip of the iceberg.
The problem, he said, is that scientists, and supporters of science, have often been unwilling to raise such controversies “because they’re afraid that talking about them will decrease trust in science and that it will aid and abet anti-science forces.”
But silence only encourages bad behavior, he argued. According to Oransky, more transparency will in fact only help the public to better comprehend the scientific process.
“At the end of the day, we need to think about science as a human enterprise, we need to remember that it’s done by humans,” he said. “Let’s remember that humans make mistakes, they cut corners, sometimes worse.”
Attention has long focused on financial conflicts of interest, particularly because of the influence of the pharmaceutical industry.
But the Wansink case illustrates that other forms of conflict, including reputational, are equally important. Academic careers are largely built on how much one publishes and in which journals.
As a result, researchers compete to produce positive, new and clear results — but work that produces negative results or validates previous findings should also be rewarded, argued Brian Nosek, a professor of psychology at the University of Virginia who heads the pro-transparency Center for Open Science.
“Most of the work when we’re at the boundary of science is messy, has exceptions, has things that don’t quite fit,” he explained, while “the bad part of the incentives environment is that the reward system is all about the result.”
While moves toward more transparency have gathered momentum over the past decade, in particular among publishers of research articles, there is still a long way to go, said Nosek.
“Culture change is hard,” he argued, adding: “Universities and medical centers are the slowest actors.”