Sunday, 2 June 2013

Killer Robots

Campaigners against "killer robots" - some of whom lobbied the UK parliament on 23 April - say their use would be morally unacceptable.
So-called killer robots are due to be discussed at the UN Human Rights Council, meeting in Geneva.
A report presented to the meeting will call for a moratorium on their use while the ethical questions they raise are debated.
The robots are machines programmed in advance to attack people or targets; unlike drones, they operate autonomously on the battlefield.
They are being developed by the US, UK and Israel, but have not yet been used.
Supporters say the "lethal autonomous robots", as they are technically known, could save lives, by reducing the number of soldiers on the battlefield.
But human rights groups argue they raise serious moral questions about how we wage war, reports the BBC's Imogen Foulkes in Geneva.
They include: Who takes the final decision to kill? Can a robot really distinguish between a military target and civilians?
If there are serious civilian casualties, they ask, who is to be held responsible? After all, a robot cannot be prosecuted for a war crime.
"The traditional approach is that there is a warrior, and there is a weapon," says Christof Heyns, the UN expert examining their use, "but what we now see is that the weapon becomes the warrior, the weapon takes the decision itself."
The moratorium called for by the UN report is not the complete ban human rights groups want, but it will give time to answer some of those questions, our correspondent says.

The era of drone wars is already upon us. The era of robot wars could be fast approaching.
Already there are unmanned aircraft demonstrators like the arrowhead-shaped X-47B that can pretty well fly a mission by itself with no involvement of a ground-based "pilot".
There are missile systems like the Patriot that can identify and engage targets automatically.
And from here it is not such a jump to a fully-fledged armed robot warrior, a development with huge implications for the way we conduct and even conceive of war-fighting.
On a carpet in a laboratory at the Georgia Institute of Technology in Atlanta, Professor Henrik Christensen's robots are hunting for insurgents. They look like cake-stands on wheels as they scuttle about. Christensen and his team at Georgia Tech are working on a project funded by the defence company BAE Systems. Their aim is to create unmanned vehicles programmed to map an enemy hideout, allowing human soldiers to get vital information about a building from a safe distance.
"These robots will basically spread out," says Christensen, "they'll go through the environment and map out what it looks like, so that by the time you have humans entering the building you have a lot of intelligence about what's happening there."
The emphasis in this project is reconnaissance and intelligence gathering. But the scientific literature has raised the possibility of armed robots, programmed to behave like locusts or other insects that will swarm together in clouds as enemy targets appear on the battlefield. Each member of the robotic swarm could carry a small warhead or use its kinetic energy to attack a target.
Henrik Christensen is developing robots that can survey enemy hideouts
Peter W Singer, an expert in the future of warfare at the Brookings Institution in Washington DC, says that the arrival on the battlefield of the robot warrior raises profound questions.
"Every so often in history, you get a technology that comes along that's a game changer," he says. "They're things like gunpowder, they're things like the machine gun, the atomic bomb, the computer… and robotics is one of those."
"When we say it can be a game changer", he says, "it means that it affects everything from the tactics that people use on the ground, to the doctrine, how we organise our forces, to bigger questions of politics, law, ethics, when and where we go to war."
Jody Williams, the American who won the Nobel Peace Prize in 1997 for her work leading the campaign to ban anti-personnel landmines, insists that the autonomous systems currently under development will, in due course, be able to unleash lethal force.
Williams stresses that value-free terms such as "autonomous weapons systems" should be abandoned.
"We prefer to call them killer robots," she says, defining them as "weapons that are lethal, weapons that on their own can kill, and there would be no human being involved in the decision-making process. When I first learnt about this," she says, "I was honestly horrified — the mere thought that human beings would set about creating machines that they can set loose to kill other human beings, I find repulsive."
It is an emotive topic.
But Professor Ronald Arkin from the Georgia Institute of Technology takes a different view.
The turtlebot could reconnoitre a battle site
He has put forward the concept of a weapons system controlled by a so-called "ethical governor".
It would have no human being physically pulling the trigger but would be programmed to comply with the international laws of war and rules of engagement.
"Everyone raises their arms and says, 'Oh, evil robots, oh, killer robots'," he notes, "but we have killer soldiers out there. Atrocities continue and they have continued since the beginning of warfare."
His answer is simple: "We need to put technology to use to address the issues of reducing non-combatant casualties in the battle-space".
He believes that "the judicious application of ethical robotic systems can indeed accomplish that, if we are foolish enough as a nation, as a world, to persist in warfare."
Arkin is no arms lobbyist and he has clearly thought about the issues.
There is another aspect to this debate that should perhaps encourage caution. At present, the US is one of the technological leaders in this field, but as Singer says, this situation will not last forever.
"The reality is that besides the United States there are 76 countries with military robotics programmes right now," he says.
"This is a rapidly proliferating technology with relatively low barriers to entry.
"You can, for a couple of hundred dollars, purchase a small drone that a couple of years ago was limited to militaries. This can't be a situation that you interpret through an American lens. It's of global concern."
Just as drone technology is spreading fast, making the debates about targeted killings of much wider relevance — so too robotics technology will spread, raising questions about how these weapons may be used or should be controlled.
The prospect of totally autonomous weapons technology - so called "human-out-of-the-loop" systems - is still some way off. But Nobel Prize winner Jody Williams is not waiting for them to arrive.
She plans to launch an international campaign to outlaw further research on robotic weapons, aiming for "a complete prohibition of robots that have the ability to kill".
"If they are allowed to continue to research, develop and ultimately use them, the entire face of warfare will be changed forever in an absolutely terrifying fashion."
Arkin takes a different view of the ethical arguments.
He says that to ban such robots outright, without doing the research to understand whether they can lower non-combatant casualties, is to do "a disservice to those who are, unfortunately, slaughtered in warfare by human soldiers".