Preventing An Autonomous Systems Arms Race
KurzweilAI.net reports that researcher Steve Omohundro has just published, in the Journal of Experimental and Theoretical Artificial Intelligence, a paper (with the title above) that “suggests that humans should be very careful to prevent future autonomous technology-based systems from developing anti-social and potentially harmful behavior.” “Modern military and economic pressures require autonomous systems that can react quickly, and without human input. These systems will be required to make rational decisions for themselves,” writes Mr. Omohundro.
“The military wants systems which are more powerful than an adversary’s, and wants to deploy them before the adversary does,” Mr. Omohundro concludes. “This can lead to ‘arms races,’ in which systems are developed on a more rapid time schedule than might otherwise be desired. There is a growing realization that drone technology is inexpensive and widely available, so we should expect escalating arms races of offensive and defensive drones. This would put pressure on the designers to make the drones more autonomous, so they can make decisions more rapidly.”
“When robotics advocates are asked by nervous onlookers about safety, a common answer is, ‘We can always unplug it.’ But imagine this outcome from the chess robot’s point of view: a future in which it is unplugged is a future in which it cannot play or win any games of chess.” Like a human being or animal seeking self-preservation, a rational machine could exhibit the following harmful or anti-social behaviors unless it is designed very carefully:
— Resource acquisition, through means such as cyber theft or manipulation;
— Improved efficiency, through alternative utilization of resources;
— Self-improvement, such as removing design constraints, if doing so is deemed advantageous.
Mr. Omohundro’s study “highlights the vulnerability of current autonomous systems to hackers and malfunctions, citing past accidents that have caused multi-billion-dollar damage, or loss of human life. Unfortunately, the task of designing more rational systems that can safeguard against the malfunctions that occurred in these accidents is a more complex task than is immediately apparent. Harmful systems might at first appear to be harder to design, or less powerful than safe systems. Unfortunately, the opposite is the case. Simpler utility functions will cause harmful behavior, and it is easier to design simple utility functions that would be extremely harmful,” he adds.
The study advises that extreme caution should be used in designing and deploying future rational technology. It suggests that a sequence of provably safe systems should first be developed, and then applied to all future autonomous systems.
But Charles Blanchard, writing on the Lawfare blog in February 2014 in a post titled “Autonomous Weapons: Is An Arms Race Really A Threat?,” argued that those who hold the view that we should be concerned with an autonomous arms race “have a powerful argument, except for one fatal flaw: a robotic weapon that cannot meet international norms is unlikely to have an advantage on the battlefield.” “Under well-established principles of international law,” Mr. Blanchard writes, “every targeting decision requires a careful series of judgments that are now done by human beings: Is this target a legitimate target? Will there be harm to civilians from the strike? Is the value of the military target nonetheless proportional to this harm? As much progress as has been made in robotics, it is unlikely that any autonomous robot, in even the near future, would have the capacity to distinguish military targets from civilians with any accuracy, or make the critical judgment about proportionality of military value to civilian harm.”
Mr. Blanchard concludes, “Perhaps the best evidence that there will be no robotic arms race is the fact that no major military power is rushing to develop or deploy these weapons. For example, while there is certainly a great deal of research activity on autonomous systems, there is no current DoD program of record for any [truly] autonomous weapon. DoD is showing great caution in the development of autonomous weapons, not merely out of concern for international law. While that is obviously a significant concern, there is also great skepticism that purely autonomous weapons will provide a military advantage, even in the battle spaces twenty or more years into the future.”
My guess is that the truth lies somewhere between the two arguments. Hacking and terrorist employment of such systems, however, could become an issue in the not-too-distant future. Outside of that potential, we’re still a long way, thankfully, from Stanley Kubrick’s 2001: A Space Odyssey and the infamous HAL 9000, who, after astronaut Dave Bowman, outside the ship, asked him to open the pod bay doors, replied: “I’m sorry, Dave. I’m afraid I can’t do that.” V/R, RCP