Machine Gunners

Dunedin robotics enthusiast and programming consultant Paul Campbell is firmly opposed to the development of killer robot technology. Photo by Craig Baxter.
Unexpected tension exists in New Zealand over killer robots. Kiwis are at the forefront of artificial intelligence, they lead the global anti-killer robot campaign and they are being accused of dragging the chain on the issue, all at the same time, writes Bruce Munro.

Killer robots are on the global priority ''to-do'' list.

For some, that means full steam ahead on developing war machines capable of making decisions and carrying out lethal missions without human direction.

For others, it means an all-out effort to stop the technology before what they predict could be a science fiction horror film writ large and real.

The catalyst for killer robot mania was an open letter published a fortnight ago, calling for ''a ban on offensive autonomous weapons beyond meaningful human control''.

The letter has been signed by science and technology luminaries including physicist Stephen Hawking, Apple Inc co-founder Steve Wozniak, author of the gold-standard text on artificial intelligence Stuart Russell, SpaceX and Tesla chief executive Elon Musk, Association for the Advancement of Artificial Intelligence past-president Nils Nilsson and linguist and activist Noam Chomsky.

Associate Professor Charles Pigden, of the University of Otago's Philosophy Department, doubts robots smart enough to make their own decisions would always obey science fiction writer Isaac Asimov's Three Laws of Robotics. Photo by Peter McIntosh.

So, what could our small corner of the globe possibly contribute to the debate?

Quite a bit, it turns out.

The surprising discovery begins with a phone call to local robotics enthusiast Paul Campbell.

He and friends involved with the Dunedin Makerspace shared workshop build robot kitsets, enabling schools that cannot afford $1000 commercial sets to participate in the annual RoboCup competition.

The call offers the promise of some local colour.

But Mr Campbell has connections and opinions with more heft than expected.

He is an electronics and programming consultant, mostly for United States-based companies, who is well aware of the killer robot debate.

New Zealander Mary Wareham, who is a US-based Human Rights Watch director, is co-ordinator for the global Campaign to Stop Killer Robots. Photo: supplied

''They are definitely not a good idea,'' Mr Campbell says.

''From a superpower point of view, they make war politically cheaper. But for a country being overrun they have no control, it is just death from the skies.

''Post-Snowden, we know enough now about what is going on to know we need to worry about our governments in general.''

In arguing against killer robot technology, Mr Campbell has friends in high places.

It turns out that the linchpin of international opposition to battlefield robots is fellow New Zealander Mary Wareham.

Ms Wareham worked as a parliamentary researcher in Wellington before shifting to the US, where she assisted with the Nobel Peace Prize-winning International Campaign to Ban Landmines.

She is the Washington-based advocacy director of Human Rights Watch's arms division as well as global co-ordinator of the Campaign to Stop Killer Robots.

Autonomous weapons present the global community with the ''biggest moral question of our generation'', Ms Wareham says.

''Should humans give the power to select and attack a target over to a machine?'' Her answer is an emphatic ''no''. And she has been rallying growing international support.

Associate Professor Mark Sagar, of Auckland University, says his ground-breaking work in artificial intelligence is ''the exact opposite'' of ''dangerous'' killer robots. Photo: supplied

A search for other New Zealanders able to give informed comment brought Associate Professor Mark Sagar strongly into focus.

The professor is at the cutting-edge of global artificial intelligence development.

A two-time Academy Award winner for his work with Weta Digital, he is director of Auckland University's Laboratory for Animate Technologies.

Prof Sagar wants to create a living brain, online. His progress is astonishing.

He and his team have built a computerised digital baby's head which, in jargonese, is ''autonomously animated in real time through biologically based neural network models''.

Translated, it means the baby's movements, facial expressions and the sounds it makes are driven by computer programs based on human brain processes.

BabyX, as she is called, interacts with her environment. She can see, follow objects with her eyes, react to viewers, learn from her interactions and create her own expressions and emotions.

The most recent iteration, BabyX v3.0, is learning to read.

At its heart, the project is an attempt to probe what makes us human by creating an online human.

''It is designed to explore and connect and illuminate the fundamental connectivity of things we hold dear: emotion, memory, experience, embodiment, free will, etc,'' Prof Sagar says.

Killer robots are ''a stupid and dangerous idea'', he states emphatically.

''What we are doing with BabyX is the exact opposite of killer robots. I have called it anti-robotics. We are trying to add the humanity that is missing from current approaches to AI and human-computer interaction.''

Associate Professor Colin Gavaghan is the director of the Centre for Emerging Technologies, University of Otago. Photo: supplied

Could his autonomous AI ''go rogue''? What constraints are needed to keep AI robots safe? Those questions will have to wait: Prof Sagar is overseas attending international AI gatherings and will be unavailable for several weeks.

Helping tackle those questions from a different angle is Dunedin philosopher Charles Pigden.

Associate Professor Pigden, of the University of Otago's Philosophy Department, believes free will is the key to realising the potential of AI, for good and bad.

In principle, human soldiers can live up to a set of rules for war, a Just War Convention, Prof Pigden says.

They can do that because, although it is difficult, they can make the complex moral distinctions between friend and foe, innocent and guilty, those who are legitimate threats and those who are not.

''If you train them right and you have the right culture, it is possible for soldiers to fight according to these rules,'' he says.

For an artificial intelligence to be capable of making the sophisticated distinctions needed to obey Just War Conventions, it would probably need the sort of consciousness that enables free will, he reasons.

''Were that the case, then we would not be able to control them. And I'm not sure they could be relied on to obey the rules,'' he concludes.

Even if battlefield robots could be trusted to obey, could we trust the people directing them?

That is a key concern for Associate Professor Colin Gavaghan, of the University of Otago's Centre for Emerging Technologies.

He fears that countries or groups that could develop or get hold of the technology would be less reticent about going to war because they would not be risking the lives of their own human population.

''Without the emotional impact of returning flag-draped coffins, and first-hand testimony from veterans, we may wonder how long the Vietnam War would have continued,'' Prof Gavaghan says.

The use of robots to do the killing could cheapen the value of human life by making it easier to view the enemy as targets rather than fellow human beings, he adds.

The only upside to autonomous robot enforcers Prof Gavaghan can see would be if they were not vulnerable to feelings of anger, hatred or the need for revenge.

''We might hope that, were patrols to comprise emotionless robot soldiers rather than frightened, flawed, sometimes vicious humans, we would be less likely to see repeats of the likes of the Haditha massacre in Iraq, or the horrific excesses of Abu Ghraib.''

Dr Andrew Colarik, of the Centre for Defence and Security Studies, Massey University. Photo: supplied

The potential problems with killer robots mean most countries' military forces will avoid them, Dr Andrew Colarik believes.

Dr Colarik has a degree in robotics. He researches information communication security at Massey University, where he is a senior lecturer in the Centre for Defence and Security Studies.

Military leaders like to be clear about decision-making accountability, he says.

A suggestion late last year by Blackwater founder Erik Prince that a few thousand mercenaries could win the war against Isis was not adopted by the US military because such a force would be too independent.

Autonomous robots would present even bigger questions about responsibility, Dr Colarik says.

He also raises the spectre of killer robots being hijacked and directed to kill their masters.

''The German Patriot missile system was recently hacked,'' Dr Colarik explains.

''The findings were that someone with just a little more access time would have been able to change the targeting and launch those missiles ... Do you see where I'm going with that? Do you really want your own weapons to be turned against you?''

Other evidence, however, suggests nation states feel the need to develop autonomous killer robots to stay in the game.

The only feasible means to avoid such an arms race would appear to be a moratorium while the ground rules are established, or an outright ban on the technology.

Both have been given voice.

The open letter, which this week had garnered almost 19,000 signatures worldwide, is calling for a ban.

United Nations special rapporteur Christof Heyns has told his organisation, ''A decision to allow machines to be deployed to kill human beings worldwide, whatever weapons they use, deserves a collective pause''.

International relations commentator Professor Robert Patman. Photo: supplied

New Zealand, which is chairing the UN Security Council, is still collecting its thoughts on the subject.

Since 2013, Peace Movement Aotearoa (PMA) has been calling on the New Zealand Government to develop and pursue a policy on killer robots.

But it has consistently failed to do so, PMA spokeswoman Edwina Hughes says.

Ms Hughes and Ms Wareham are calling on the Government to stop sitting on the fence and support international moves that could lead to a multi-country protocol on lethal autonomous weapons systems (Laws).

In April, Ms Wareham attended a UN meeting on killer robots, held in Geneva, at which New Zealand was conspicuous by its silence, she says.

''It was an odd show for a nation known for its disarmament leadership,'' Ms Wareham says.

''It is a far cry from New Zealand's leading engagement on disarmament matters before 2011 when the 25-year-old portfolio for a disarmament and arms control minister was removed by the government.''

In response to questions put to Minister of Foreign Affairs Murray McCully, his office issued a statement attributed to a spokesman.

''This technology has not yet been developed,'' the statement read.

''New Zealand will develop a position on Laws in concert with other governments when the international community is clearer about their potential impact and when there is a clearer understanding about how a line could be drawn between automated and autonomous weapons.''

That sort of agreement may not arise if governments are left to their own devices, foreign policy commentator Professor Robert Patman says.

''At the moment, given the lack of consensus and distrust within the permanent five members of the UN Security Council, I wouldn't put the chances as particularly high,'' Prof Patman, of the University of Otago's politics department, says.

He suggests people power may eventually force countries to adopt a united approach.

''Increasingly, people are acting in a way that straddles boundaries; coming together to form transnational pressure groups.

''And there does seem to be some general concern that killer robots are one more problem we don't need.''

 


We come in peace: What's the chance?

Charles Pigden says not even Isaac Asimov was convinced intelligent robots would not harm humans.

Asimov, the doyen of robot science fiction, created the Three Laws of Robotics to underpin all robot interactions with humans. But, says University of Otago philosopher Associate Professor Charles Pigden, there are a couple of serious question marks over the concept and robot deployment.

The Three Laws first appeared together in Asimov's 1942 short story Runaround.

They are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later, he added a fourth, or zeroth, law to precede the other three.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

''That is the prime directive,'' Prof Pigden says.

''Asimov imagines you could create something sufficiently of a human-like intelligence, or even super-human intelligence, and build into it that kind of moral rule with no give.''

Asimov's own writings, however, reveal that he was not completely sure it was possible.

The problem, Prof Pigden says, is that fully autonomous robots would need something approximating free will, the freedom to choose.

''Of course, this is all conjecture. But I am inclined to think you could not reliably programme a human-like intelligence so that it obeyed the rules no matter what.''

The idea of autonomous battlefield robots only compounds the problem, Prof Pigden says.

''Even if you could [ensure they obey the rules], what is being proposed with killer robots is that they don't obey that rule.''
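For readers who want the precedence Prof Pigden describes made concrete, here is a minimal, purely illustrative Python sketch of Asimov's laws treated as an ordered veto list that every proposed action must pass. It reflects no real robotics system; the Action fields and the permitted function are hypothetical names invented for illustration.

# Purely illustrative sketch: Asimov's laws as an ordered veto list.
# No real robotics API is being modelled here.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False
    harms_humanity: bool = False

# Highest-priority law first; an earlier law always wins, with "no give".
LAWS = [
    ("Zeroth", lambda a: not a.harms_humanity),
    ("First", lambda a: not a.harms_human),
]

def permitted(action: Action) -> bool:
    """Allow an action only if no higher-priority law vetoes it."""
    for name, allows in LAWS:
        if not allows(action):
            print(f"{name} Law vetoes: {action.description}")
            return False
    return True

# A killer robot, by definition, needs the First Law's veto removed,
# which is exactly the exception Prof Pigden says compounds the problem.
print(permitted(Action("engage human target", harms_human=True)))  # False

The point of the sketch is the structure, not the predicates: deciding whether harms_human is actually true of a given action is precisely the kind of judgement Prof Pigden argues cannot be reliably programmed.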

 


When will the killer robots arrive?

Robot soldiers, intelligent enough to make their own decisions, armed and doing their masters' bidding.

It sounds like the plot for hundreds of science fiction movies. Could it even be possible, let alone on the near horizon?

The signs are ominous.

• In November, 2012, the United States Defence Department issued a directive banning the use of lethal force by fully autonomous weapons for up to 10 years, unless specifically authorised by senior officials.

• In May, 2013, United Nations special rapporteur Christof Heyns, commenting on that US directive in an address to the UN Human Rights Council in Geneva, said, however, ''It is clear that very strong forces, including technology and budgets, are pushing in the opposite direction''.

• Late last month, such was their concern, more than 1000 leading researchers in robotics and artificial intelligence (AI), as well as technologists and experts in related fields from throughout the world, signed an open letter calling for a ban on autonomous weapons, aka killer robots. Signatories include physicist Stephen Hawking, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, SpaceX and Tesla chief executive Elon Musk and linguist and activist Noam Chomsky.

• The open letter, which now has almost 19,000 signatories globally, states ''Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is ... feasible within years, not decades''.

 


Hear ye hear ye

Excerpts from the open letter signed by leading artificial intelligence and robotics researchers

''Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria . . .

''Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is - practically if not legally - feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms . . .

''The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable . . .

''Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity ...

''We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.''



