US, Russian astronauts safe after emergency landing

The Soyuz-FG rocket booster, carrying the Soyuz MS-10 spacecraft with a new crew for the International Space Station, lifts off before an emergency shutdown of its second stage. (AP)
Updated 11 October 2018


  • The three-stage Soyuz booster suffered an emergency shutdown of its second stage
  • The launch failure marks an unprecedented mishap for the Russian space program

BAIKONUR, Kazakhstan: Two astronauts from the US and Russia were safe after an emergency landing Thursday in the steppes of Kazakhstan following the failure of a Russian booster rocket carrying them to the International Space Station.
NASA astronaut Nick Hague and Roscosmos’ Alexei Ovchinin lifted off as scheduled at 2:40pm Thursday from the Russia-leased Baikonur cosmodrome in Kazakhstan atop a Soyuz booster rocket. Roscosmos and NASA said the three-stage Soyuz booster suffered an emergency shutdown of its second stage. The capsule jettisoned from the booster and went into a ballistic descent, landing at a sharper than normal angle.
The launch failure is an unprecedented mishap for the crewed missions of Russia's post-Soviet space program, which has been dogged by a string of launch failures and other incidents in recent years.
“Thank God, the crew is alive,” Russian President Vladimir Putin’s spokesman Dmitry Peskov told reporters when it became clear that the crew had landed safely.
The crew had been due to dock at the orbiting outpost six hours after liftoff, but the booster failed minutes into the flight.
NASA and Russia's Roscosmos space agency said the astronauts were in good condition after their capsule landed about 20 kilometers east of the city of Dzhezkazgan in Kazakhstan.
Search and rescue teams were heading to the area to recover the crew. Dzhezkazgan is about 450 kilometers northeast of Baikonur. Spacecraft returning from the ISS normally land in that region.


Google chief trusts AI makers to regulate the technology

Updated 13 December 2018


  • Tech companies building AI should factor in ethics early in the process to make certain artificial intelligence with “agency of its own” doesn’t hurt people, Pichai said
  • Google vowed not to design or deploy AI for use in weapons, surveillance outside of international norms, or in technology aimed at violating human rights

SAN FRANCISCO: Fears about artificial intelligence are valid, but the tech industry is up to the challenge of regulating itself, Google chief Sundar Pichai said in an interview published on Wednesday.
Tech companies building AI should factor in ethics early in the process to make certain artificial intelligence with “agency of its own” doesn’t hurt people, Pichai said in an interview with the Washington Post.
“I think tech has to realize it just can’t build it, and then fix it,” Pichai said. “I think that doesn’t work.”
The California-based Internet giant is a leader in the development of AI, competing in the smart software race with titans such as Amazon, Apple, Microsoft, IBM and Facebook.
Pichai said worries about harmful uses of AI are “very legitimate” but that the industry should be trusted to regulate its use.
“Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said.
“This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”
Google in June published a set of internal AI principles, the first being that AI should be socially beneficial.
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a memo posted with the principles.
“As a leader in AI, we feel a deep responsibility to get this right.”
Google vowed not to design or deploy AI for use in weapons, surveillance outside of international norms, or in technology aimed at violating human rights.
The company noted that it would continue to work with the military or governments in areas such as cybersecurity, training, recruitment, health care, and search-and-rescue.
AI is already used to recognize people in photos, filter unwanted content from online platforms, and enable cars to drive themselves.
The increasing capabilities of AI have triggered debate about whether computers that could think for themselves would help cure the world's ills or turn on humanity, as depicted in works of science fiction.