Killer robots — not a good idea

Killer robots is a dreadful name, don’t you think? It reminds you of the killing machines in the “Terminator” series and the “Battle Droids” of “Star Wars.” “Lethal Autonomous Weapons Systems” is a much classier name, and the acronym is even better: LAWS. So the international conference that opened at the United Nations Geneva office on Monday is about LAWS.
Don’t think “drones” here. Drones loiter almost silently, high in the air above your picnic, until the operator back in Las Vegas decides that you are plotting a terrorist attack and orders the drone to kill you and your family. But at least there is an operator, a human being in the decision-making loop.
With LAWS, there isn’t. The machine sorts through its algorithms, and decides on its own whether to kill you or not. So you’ll probably be glad to know that there are no operational machines of that sort — yet. But military researchers in various countries are working hard on them, and they probably will exist in 10 or 20 years.
Unless we ban them. That’s what the conference in Geneva is about. It’s a meeting of diplomats, arms control experts and ethics and human rights specialists who, if they agree that this is a real threat, will put it on the agenda of next November’s annual meeting of the countries that have signed the Convention on Certain Conventional Weapons (CCW). So it’s early days yet, and there’s still a chance to nip this in the bud.
That’s an awkward name, but not nearly as clumsy as the full name: The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. But it actually has done some good already, and it may do some more.
Protocol I bans “the use of weapons the primary effect of which is to injure by fragments which are not detectable by X-rays in the human body.” Protocol II requires countries that use land mines to make them deactivate automatically after a certain period. Protocol IV, added in 1995, prohibits the use of blinding laser weapons.
The world would be a worse place if these rules did not exist. They do exist, and by and large they are obeyed. But none of these weapons would make a decisive difference in actual battle, while they cause, or would cause, great human misery, so banning them was relatively easy.
The problem with killer robots is that they could make a decisive difference in battle. They don’t get tired, they don’t get paralyzed with fear, and if you lose them, so what? It’s just a machine. There’s no person in there. But that’s precisely the problem: There’s no person in there. Do you trust the machine to make decisions about killing people — who’s a soldier and a legitimate target, who’s an innocent civilian — all by itself?
So by all means let’s ban purpose-built killer robots if we can: This is an initiative that deserves our support. But in case a ban proves impossible, it is also time to start working on international rules governing their behavior.
Isaac Asimov’s Three Laws of Robotics (written in 1942) would be a good point of departure.
One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Two: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
Three: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect Arab News' point of view