Fall of top US scientists points to ethics gap in research

In this Dec. 6, 2016 file photo, Brian Wansink speaks during an interview in the produce section of a supermarket in Ithaca, N.Y. (AP)
Updated 24 September 2018


  • Links between a doctor leading a clinical trial and manufacturers of drugs or medical equipment used in the study can influence the methodology and ultimately the results

WASHINGTON: Three prominent US scientists have been pushed to resign over the past 10 days after damning revelations about their methods, a sign of greater vigilance and decreasing tolerance for misconduct within the research community.
The most spectacular fall concerned Jose Baselga, chief medical officer at Memorial Sloan Kettering Cancer Center in New York. He authored hundreds of articles on cancer research.
Investigative journalism group ProPublica and The New York Times revealed on September 8 that Baselga failed to disclose in dozens of research articles that he had received millions of dollars from pharmaceutical and medical companies.
Such declarations are generally required by scientific journals.
Links between a doctor leading a clinical trial and manufacturers of drugs or medical equipment used in the study can influence the methodology and ultimately the results.
But journals don’t themselves verify the thoroughness of an author’s declarations.
Caught up in the scandal, Baselga resigned on September 13.

Next came the case of Brian Wansink, director of the Food and Brand Lab at the prestigious Cornell University.
He made his name with studies that drew plenty of media attention, including on pizza and on children's appetites.
His troubles began last year when scientific sleuths discovered anomalies and surprisingly positive results in dozens of his articles.
In February, BuzzFeed published messages in which Wansink encouraged a researcher to extract from her data results more likely to go “viral.”
After a yearlong inquiry, Cornell announced on Thursday that Wansink committed “academic misconduct in his research and scholarship,” describing a litany of problems with his results and methods.
He is set to resign at the end of the academic year, but he will no longer teach there in the meantime.
Wansink has denied all fraud, but journals have already retracted 13 of his articles.
In the final case, Gilbert Welch, a professor of public health at Dartmouth College, resigned last week.
The university accused him of plagiarism in an article published in The New England Journal of Medicine, the most respected American medical journal.

“The good news is that we are finally starting to see a lot of these cases become public,” said Ivan Oransky, co-founder of the site Retraction Watch, a project of the Center for Scientific Integrity that keeps tabs on retractions of research articles in thousands of journals.
Oransky told AFP that what has emerged so far is only the tip of the iceberg.
The problem, he said, is that scientists, and supporters of science, have often been unwilling to raise such controversies “because they’re afraid that talking about them will decrease trust in science and that it will aid and abet anti-science forces.”
But silence only encourages bad behavior, he argued. According to Oransky, more transparency will in fact only help the public to better comprehend the scientific process.
“At the end of the day, we need to think about science as a human enterprise, we need to remember that it’s done by humans,” he said. “Let’s remember that humans make mistakes, they cut corners, sometimes worse.”
Attention has long focused on financial conflicts of interest, particularly because of the influence of the pharmaceutical industry.
But the Wansink case illustrates that other forms of conflict, including reputational, are equally important. Academic careers are largely built on how much one publishes and in which journals.
As a result, researchers compete to produce positive, new and clear results — but work that produces negative results or validates previous findings should also be rewarded, argued Brian Nosek, a professor of psychology at the University of Virginia who heads the pro-transparency Center for Open Science.
“Most of the work when we’re at the boundary of science is messy, has exceptions, has things that don’t quite fit,” he explained, while “the bad part of the incentives environment is that the reward system is all about the result.”
While moves toward more transparency have gathered momentum over the past decade, in particular among publishers of research articles, there is still a long way to go, said Nosek.
“Culture change is hard,” he argued, adding: “Universities and medical centers are the slowest actors.”


Russia to send ‘Fedor’ its first humanoid robot into space

Updated 22 August 2019


  • Fedor was to blast off in a Soyuz rocket at 6:38 am Moscow time (0338 GMT) from Russia’s Baikonur cosmodrome
  • Fedor is not the first robot to go into space

MOSCOW: Russia was set to launch on Thursday an unmanned rocket carrying a life-size humanoid robot that will spend 10 days learning to assist astronauts on the International Space Station.
Named Fedor, short for Final Experimental Demonstration Object Research, and bearing the identification number Skybot F850, the robot is the first Russia has ever sent into space.
Fedor was to blast off in a Soyuz rocket at 6:38 am Moscow time (0338 GMT) from Russia’s Baikonur cosmodrome in Kazakhstan, dock with the space station on Saturday and stay until September 7.
The Soyuz spacecraft is normally manned on such trips, but no humans will be aboard Thursday’s flight, which will test a new emergency rescue system.
Instead of cosmonauts, Fedor will sit in a specially adapted pilot’s seat.

The silvery anthropomorphic robot stands 1.80 meters (5 feet 11 inches) tall and weighs 160 kilograms (353 pounds).
Fedor has Instagram and Twitter accounts that describe it as learning new skills such as opening a bottle of water. In the station, it will trial those manual skills in very low gravity.
“That’s connecting and disconnecting electric cables, using standard items from a screwdriver and a spanner to a fire extinguisher,” the Russian space agency’s director for prospective programs and science, Alexander Bloshenko, said in televised comments.
Fedor copies human movements, a key skill that allows it to remotely help astronauts or even people on Earth carry out tasks while they are strapped into an exoskeleton.
Such robots will eventually carry out dangerous operations such as space walks, Bloshenko told RIA Novosti state news agency.
On the website of one of the state backers of the project, the Foundation of Advanced Research Projects, Fedor is described as potentially useful on Earth for working in high radiation environments, de-mining and tricky rescue missions.
On board, the robot will perform tasks supervised by Russian cosmonaut Alexander Skvortsov, who joined the ISS last month, and will wear an exoskeleton in a series of experiments scheduled for later this month.

Robonaut 2, Kirobo
Space agency chief Dmitry Rogozin showed pictures of the robot to President Vladimir Putin this month, saying it will be “an assistant to the crew.”
“In the future we plan that this machine will also help us conquer deep space,” he added.
Fedor is not the first robot to go into space.
In 2011, NASA sent up Robonaut 2, a humanoid robot developed with General Motors and a similar aim of working in high-risk environments.
It was flown back to Earth in 2018 after experiencing technical problems.
In 2013, Japan sent up a small robot called Kirobo along with the ISS’s first Japanese space commander. Developed with Toyota, it was able to hold conversations — albeit only in Japanese.