
Do We Really Want Terminator-Style Weapons Systems?

It would only be a matter of time before terrorists and non-state-aligned combatants used this technology to inflict catastrophic damage on civilian populations. (Image: Pixabay/CC0 Public Domain)

That the U.S. military is developing killer robots comes as no surprise to me. But when so many scientists warn that such weapons “will leave humans utterly defenseless,” it may well send shivers down your spine.

We currently know of two such programs commissioned by the U.S. Defense Advanced Research Projects Agency (DARPA): Fast Lightweight Autonomy (FLA) and Collaborative Operations in Denied Environment (CODE). Together they point toward drones that would continue to track and kill their targets even when their handlers have no control over them. That might sound acceptable, until something goes wrong.

(Video: Stop Killer Robots Conference during CCW meeting at the U.N. in Geneva)

In the journal Nature, in a comment piece titled Robotics: Ethics of artificial intelligence, four leading researchers share their concerns about intelligent machines and their proposals for reducing the societal risks they pose.

Stuart Russell, a professor of computer science at the University of California, Berkeley, wrote: “The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS).

“Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans.”

The CODE program aims to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission—find, fix, track, target, engage, assess—in situations in which enemy signal-jamming makes communication with a human commander impossible.”

The 1949 Geneva Conventions require any attack to satisfy three criteria: military necessity; discrimination between combatants and non-combatants; and proportionality between the value of the military objective and the potential for collateral damage. The Martens Clause, added in 1977, is also relevant: it bans weapons that violate “principles of humanity and the dictates of public conscience.”

(Video: MilitarySkynet.com News)

There has been a series of meetings on LAWS under the auspices of the Convention on Certain Conventional Weapons (CCW) at the United Nations in Geneva. Russell, an AI specialist, was asked to give expert testimony at the third major meeting under the CCW.

“Several countries pressed for an immediate ban. Germany said that it ‘will not accept that the decision over life and death is taken solely by an autonomous system.’

“Japan stated that it ‘has no plan to develop robots with humans out of the loop, which may be capable of committing murder,’” Russell wrote.

“The United States, the United Kingdom and Israel—the three countries leading the development of LAWS technology—suggested that a treaty is unnecessary because they already have internal weapons review processes that ensure compliance with international law.”

(Video: Autonomous weapons systems under international law, Geneva Academy of International Humanitarian Law and Human Rights)

“Almost all states that are party to the CCW agree with the need for ‘meaningful human control’ over the targeting and engagement decisions made by robotic weapons. Unfortunately, the meaning of ‘meaningful’ is still to be determined.

“LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill—for example, they might be tasked to eliminate anyone exhibiting ‘threatening behavior.’

“The potential for LAWS technologies to bleed over into peacetime policing functions is evident to human-rights organizations and drone manufacturers.”

In Russell’s view: “The overriding concern should be the probable endpoint of this technological trajectory. The capabilities of autonomous weapons will be limited more by the laws of physics—for example, by constraints on range, speed, and payload—than by any deficiencies in the AI systems that control them.”

(Video: Killer robots—drone strikes of the future?)

“For instance, as flying robots become smaller, their maneuverability increases, and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload, perhaps a 1-gram shaped charge to puncture the human cranium.

“Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenseless. This is not a desirable future.”

Angela Kane, the UN’s high representative for disarmament, said killer robots were just a “small step” away and called for a worldwide ban. But the Foreign Office said that while the technology had potentially “terrifying” implications, Britain “reserves the right” to develop it to protect troops, The Telegraph reported.

There are supporters of AI, however. Dr. Sabine Hauert, a lecturer in robotics at the University of Bristol, said: “My colleagues and I spend dinner parties explaining that we are not evil, but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the ocean.”

(Video: Killer robots the subject of a UN debate)

Russ Altman, a professor of bioengineering, genetics, medicine, and computer science at Stanford University, said: “Artificial intelligence (AI) has astounding potential to accelerate scientific discovery in biology and medicine, and to transform health care. AI systems promise to help make sense of several new types of data: measurements from the ‘omics,’ such as genomics, proteomics, and metabolomics; electronic health records; and digital-sensor monitoring of health signs.”

(Video: Dr. Michael Horowitz on the future of autonomous weapons)

Manuela Veloso, a professor of computer science at Carnegie Mellon University, said: “Humans seamlessly integrate perception, cognition, and action. We use our sensors to assess the state of the world, our brains to think and choose actions to achieve objectives, and our bodies to execute those actions. My research team is trying to build robots that are capable of doing the same—with artificial sensors (cameras, microphones, and scanners), algorithms, and actuators, which control the mechanisms.”
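
Veloso’s description maps onto the classic sense-think-act control loop used throughout robotics. The Python below is a minimal sketch of that loop, assuming nothing about her team’s actual software; every name in it (Observation, sense, think, act) is a hypothetical placeholder for illustration.

```python
# A minimal, hypothetical sketch of the sense-think-act loop Veloso
# describes: sensors assess the world, an algorithm chooses an action,
# and actuators execute it. All names here are illustrative.

from dataclasses import dataclass


@dataclass
class Observation:
    """What the robot's artificial sensors report about the world."""
    obstacle_ahead: bool
    distance_to_goal: float  # meters


def sense() -> Observation:
    """Perception: a real robot would read cameras, microphones, and
    scanners here; this stub returns a fixed observation."""
    return Observation(obstacle_ahead=False, distance_to_goal=2.0)


def think(obs: Observation) -> str:
    """Cognition: choose an action that pursues the objective."""
    if obs.obstacle_ahead:
        return "turn"
    if obs.distance_to_goal > 0.5:
        return "forward"
    return "stop"


def act(action: str) -> None:
    """Action: a real robot would drive motors; this stub just logs."""
    print(f"executing: {action}")


# The control loop: perceive, decide, act, repeat.
for _ in range(3):
    act(think(sense()))
```

In real systems this loop runs continuously at a fixed rate and feeds state estimates from one cycle into the next, but the three-stage perceive-decide-execute structure is the same.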

I have no problem with AI being put to non-weapons uses.

Weaponizing this technology, however, will only hurt everyone in the long run.

It would only be a matter of time before terrorists and non-state-aligned combatants used this technology to inflict catastrophic damage on civilian populations.
