LONDON/JEDDAH: Elon Musk’s latest brainchild, a ‘friendly’ humanoid robot, sparked a social media frenzy following its announcement at Tesla’s AI Day event last week.
It has split opinion between fans enthused by the technological advance and critics with Hollywood’s dystopian depictions of a world run by robots firmly rooted in their minds.
Those stories rarely end well for humanity. Musk’s attempted assurance that the design would make the robot deliberately slow and weak, one ‘you can run away from’, may not assuage genuine fears about the impact of advancing AI on humanity.
YouTube co-founder Chad Hurley was skeptical that the announcement was anything more than clever marketing by the Tesla CEO. He tweeted: “Hmm, autopilot still doesn’t work… how can we prop up the stock? Robots!”
The autopilot reference relates to recent problems with Autopilot, the partially automated driving system in Tesla’s cars. U.S. authorities have begun an investigation covering hundreds of thousands of Tesla vehicles.
Chris Holm, scientist and author, took to Twitter to express his misgivings about the robot, named Optimus. “Seems to me, if you have to put an ‘it won’t murder you’ disclaimer on the announcement of your next big product, you’re already behind the eight-ball.”
"Tesla says it is building a ‘friendly’ robot that will perform menial tasks, won’t fight back."
Seems to me, if you have to put an "it won't murder you" disclaimer on the announcement of your next big product, you're already behind the eight-ball.https://t.co/sk2VBM7CPs
— Chris Holm (@chrisfholm) August 20, 2021
Musk himself has in the past been vocal in his warnings that the proliferation of AI and its adoption by wider society would be akin to ‘summoning the demon’.
He suggested that the pace at which AI would advance posed a ‘fundamental existential risk’. He is in good company here. The renowned English physicist Stephen Hawking remained fearful to the end that AI could ‘end mankind’, with a new form of life outperforming humans and destroying civilisation.
Dr. Mishaal Al-Harbi, chief operating officer at Riyadh-based Research Products Development, one of the leading robotics companies in the Kingdom and a support agency for developing R&D and commercializing academic research, said society and policy makers need to determine the rules that govern humanoid robots.
Researchers need to find ways to codify these rules into the AI that governs robots’ behavior and interactions with humans, he added.
“In terms of the humanoid, I remembered the famous science fiction writer Isaac Asimov and the Three Laws of Robotics that he introduced in his 1942 short story ‘Runaround.’ Those three rules basically address how a robot can perform its duties or responsibilities without hurting humans,” Al-Harbi told Arab News.
The first rule is that ‘a robot may not injure a human being or, through inaction, allow a human being to come to harm.’
“A robot must obey orders given to it by a human being, except where such orders would conflict with the first law. The last rule is that the robot must protect its own existence as long as such protection does not conflict with the first and the second. This was discussed in the 1940s, but it still gives a basis for how much freedom we should give robots and AI,” he said.
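To illustrate the kind of codification Al-Harbi describes, here is a minimal sketch in Python, assuming a deliberately simplified model of a robot’s proposed actions. The `Action` fields and the `check_action` function are hypothetical names invented for this illustration; they are not drawn from Tesla’s software or any real robotics framework, and real systems would face the much harder problem of judging what counts as harm in the first place.

```python
# Toy illustration only: a hypothetical, greatly simplified way to encode
# Asimov-style rules as a priority-ordered filter on proposed robot actions.
# The Action fields and checks below are invented for this sketch and do not
# reflect any real robot control system.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harms_human: bool        # would carrying this out injure a person?
    ordered_by_human: bool   # was this commanded by a human operator?
    endangers_robot: bool    # would it damage or destroy the robot itself?


def check_action(action: Action) -> bool:
    """Return True if the action is permitted under the three prioritized rules."""
    # First law: never harm a human (highest priority, no exceptions here).
    if action.harms_human:
        return False
    # Second law: obey human orders, unless doing so would break the first law
    # (already excluded above). Orders take precedence over self-preservation.
    if action.ordered_by_human:
        return True
    # Third law: protect the robot's own existence, but only when that does not
    # conflict with the first two laws.
    return not action.endangers_robot


if __name__ == "__main__":
    tasks = [
        Action("carry a heavy crate as instructed", harms_human=False,
               ordered_by_human=True, endangers_robot=False),
        Action("push past a person blocking the aisle", harms_human=True,
               ordered_by_human=True, endangers_robot=False),
        Action("wander into a furnace to retrieve a tool", harms_human=False,
               ordered_by_human=False, endangers_robot=True),
    ]
    for task in tasks:
        verdict = "allowed" if check_action(task) else "refused"
        print(f"{task.description}: {verdict}")
```

As the sketch suggests, writing the conditional is the easy part; the open research challenge Al-Harbi points to is giving a robot the judgment to recognize harm and conflicting obligations in messy real-world situations.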
The primary aim of Optimus is to eliminate dangerous, repetitive and boring tasks, and the robot is intended to be ‘friendly of course’. It will stand 173 cm tall and weigh 57 kg.
Musk said the machine would be deliberately weak enough that most humans will be able to overpower it if needed. “You never know,” said Musk at the event.
Al-Harbi explained that, in the future, Teslabots could conduct tasks that are too dangerous or risky for humans, such as search and rescue operations, or working in very hostile environments like mines.
“There are a lot of areas where the robot can do a lot of good, but the challenge - and this is something that also requires research, especially in AI - is how you can allow the humanoids or robots to conduct their responsibilities within certain parameters and guidelines to prevent them from causing damage.
“This is a technical issue, but also a philosophical issue that needs to be addressed in probably a separate track in AI research; how to enable this capability, to make sure that the robot does not do harm to others. I don’t see it as a threat, I see it as a challenge, and as long as people are working on this challenge then I believe a lot of good can come out of it,” he added.
While the announcement predictably triggered a slew of memes across social media, TheVerge.com, a tech news site, went further, suggesting it was nothing more than pure theatrics from the flamboyant Tesla chief.
The site described the announcement, and the stage antics of a dancer dressed as a robot, as ‘a bizarre and brilliant bit of tomfoolery’ designed to mock Tesla’s critics and generate more publicity for the company.
It will be next year at the earliest, when Musk said somewhat vaguely that he thought he would “probably have a prototype,” before we have a better idea of whether the entrepreneur is serious about the Teslabot.
Even if he is not, others will be working on humanoid robots, and the rules Al-Harbi says we need will have to apply to them as well.