Arms experts and scientists at an elite gathering in the Swiss Alps have warned that countries must act quickly to prevent future battlefields from being roamed by autonomous, artificially intelligent robots that kill humans.
The annual World Economic Forum — which brought together billionaires, scientists, and political leaders — arranged an hour-long panel discussion to consider: “What if robots go to war?”
It featured four speakers: former UN disarmament chief Angela Kane, BAE Systems chair Sir Roger Carr, artificial intelligence (AI) expert Stuart Russell, and robot ethics expert Alan Winfield.
Stuart Russell, a professor of computer science at the University of California, Berkeley, said:
“We have a fairly short time horizon to act, if within the next two years the main parties aren’t drawn into serious negotiations… it may be too late.
“We are not talking about drones, where a human pilot is controlling the drone.
“We are talking about systems that weigh less than an ounce, can fly faster than a person can run, and can blow holes in their heads with one gram of explosive and can be launched in the millions.”
BAE Systems chair Sir Roger Carr said that these types of systems would be able to identify and select a target, adjust their behavior if necessary, and deploy force [fire] without any human intervention.
With the very real possibility of future wars being fought by autonomous robots lacking any kind of human moral compass, the panel urged global leaders to address the threat soon.
Despite their different backgrounds, all the panelists agreed that autonomous weapons systems pose a real danger and require immediate diplomatic action to negotiate a legally binding instrument that draws the line at weapons not under human control.
Angela Kane, the German UN High Representative for Disarmament Affairs from 2012 to 2015, said she felt the world had been too slow to take pre-emptive measures to protect humanity from the lethal technology:
“It may be too late, there are many countries and many representatives in the international community that really do not understand what is involved.
“This development is something that is limited to a certain number of advanced countries.”
According to Bloomberg Business: “last year, more than 90 countries met to discuss the challenges raised by autonomous weapons systems, and another meeting is planned for April.”
The panelists all agreed that keeping killer robots out of the hands of non-state armed groups would be challenging, if not impossible. They also dismissed the weapons’ supposed deterrence value.
Both Russell and Carr agreed that fully autonomous weapons would not be able to abide by the laws of war.
Ethical and moral concerns were the main topics raised, with the most surprising comment coming from arms industry representative Carr, who said that fully autonomous weapons would be “devoid of responsibility” and would have “no emotion or sense of mercy,” warning:
“If you remove ethics, judgement, and morality from human endeavour whether it is in peace or war, you will take humanity to another level, which is beyond our comprehension.”
According to the Campaign to Stop Killer Robots, an international coalition of non-governmental organizations and academics:
“Kane, who is now a fellow with the Vienna-based Center for Disarmament and Non-Proliferation, described how states can use the framework Convention on Conventional Weapons (CCW) to negotiate another protocol on autonomous weapons.
“She criticized the pace of diplomatic deliberations in this forum since 2014 as ‘glacial’ and encouraged the U.S. to ‘take the lead’ diplomatically and ‘elevate the debate’ as France and Germany have been doing on killer robots at the CCW.
“Russell described the next 18 months to two years as critical in achieving a negotiating process. Carr said several times that ‘a line needs to be drawn,’ not least because others will likely use killer robots irresponsibly.”
Prior to and during the panel talk, a three-question poll on killer robots was conducted. The first results found that 88 percent of people in the room, and 55 percent outside it, agreed that if attacked they would want to be defended by autonomous weapons rather than by human “sons and daughters.”
The panelists dismissed this as naïve, with Winfield saying it revealed an “extraordinary misplaced and misguided confidence in AI systems” because “even a well-designed robot will behave chaotically and make mistakes in a chaotic environment.”
Some 1,000 science and technology leaders, including physicist Stephen Hawking, said in an open letter last July that the development of weapons with a degree of autonomous decision-making capacity could be feasible within years, not decades.
They also called for a ban on offensive autonomous weapons, which are beyond any meaningful human control. They warned that the world would be at risk of sliding into an artificial intelligence arms race. They also raised the alarm over the risks of these weapons falling into the hands of violent extremists.