Scientists develop brain scanner in a helmet

A handout photo released on March 21, 2018 via Nature from Wellcome shows a woman with a brain scanner. (AFP PHOTO / NATURE / WELLCOME)
Updated 21 March 2018
British scientists have developed a lightweight and highly sensitive brain imaging device that can be worn as a helmet, allowing the patient to move about naturally.
Results from tests of the scanner showed that patients were able to stretch, nod and even drink tea or play table tennis while their brain activity was being recorded, millisecond by millisecond, by the magnetoencephalography (MEG) system.
Researchers who developed the device and published their results in the journal Nature said they hoped the new scanner would improve research and treatment for patients who can’t use traditional fixed MEG scanners, such as children with epilepsy, babies, or patients with disorders like Parkinson’s disease.
“This has the potential to revolutionize the brain imaging field, and transform the scientific and clinical questions that can be addressed with human brain imaging,” said Gareth Barnes, a professor at the Wellcome Trust Center for Human Neuroimaging at University College London, who co-led the work.
Current MEG scanners are cumbersome and weigh as much as half a ton, partly because the sensors they use to measure the brain’s magnetic field need to be kept very cold — at minus 269 degrees Celsius, Barnes’ team explained.
They also run into difficulties when patients are unable to stay very still — for example, very young children or patients with movement disorders — since even a 5-millimeter movement can render the images unusable.
In the helmet scanner, the researchers overcame these problems by using quantum sensors, which are lightweight, work at room temperature and can be placed directly onto the scalp — increasing the amount of signal they are able to pick up.
Matt Brookes, who worked with Barnes and built the prototype at the University of Nottingham, said that as well as overcoming the challenge of some patients being unable to stay still, the wearable scanner offers new possibilities for measuring people's brain function during real-world tasks and social interactions.
“This has significant potential for impact on our understanding of not only healthy brain function but also on a range of neurological, neurodegenerative and mental health conditions.”


Google chief trusts AI makers to regulate the technology

Updated 13 December 2018

  • Tech companies building AI should factor in ethics early in the process to make certain artificial intelligence with “agency of its own” doesn’t hurt people, Pichai said
  • Google vowed not to design or deploy AI for use in weapons, surveillance outside of international norms, or in technology aimed at violating human rights

SAN FRANCISCO: Google chief Sundar Pichai said fears about artificial intelligence are valid but that the tech industry is up to the challenge of regulating itself, in an interview published on Wednesday.
Tech companies building AI should factor in ethics early in the process to make certain artificial intelligence with “agency of its own” doesn’t hurt people, Pichai said in an interview with the Washington Post.
“I think tech has to realize it just can’t build it, and then fix it,” Pichai said. “I think that doesn’t work.”
The California-based Internet giant is a leader in the development of AI, competing in the smart software race with titans such as Amazon, Apple, Microsoft, IBM and Facebook.
Pichai said worries about harmful uses of AI are “very legitimate” but that the industry should be trusted to regulate its use.
“Regulating a technology in its early days is hard, but I do think companies should self-regulate,” he said.
“This is why we’ve tried hard to articulate a set of AI principles. We may not have gotten everything right, but we thought it was important to start a conversation.”
Google in June published a set of internal AI principles, the first being that AI should be socially beneficial.
“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in a memo posted with the principles.
“As a leader in AI, we feel a deep responsibility to get this right.”
Google vowed not to design or deploy AI for use in weapons, surveillance outside of international norms, or in technology aimed at violating human rights.
The company noted that it would continue to work with the military or governments in areas such as cybersecurity, training, recruitment, health care, and search-and-rescue.
AI is already used to recognize people in photos, filter unwanted content from online platforms, and enable cars to drive themselves.
The increasing capabilities of AI have triggered debate about whether computers that could think for themselves would help cure the world's ills or turn on humanity, as depicted in works of science fiction.