In a week during which police admitted that fewer than 350 road deaths this year would be ``some sort of victory'', the notion of a road death becoming as rare as a plane crash has great appeal.
``It's a no-brainer, in fact. Let's do it now,'' University of Otago Law Faculty Associate Prof Colin Gavaghan says.
``The road toll should not just be complacently accepted as inevitable. It is a problem to be fixed, not a fact of life.''
Except, nothing is as simple as that, as a University of Otago research project is fast discovering.
For that revolution in road safety to happen, New Zealanders would have to abandon the driver's seat and turn the vehicle over to a computer - the ``driverless car''.
That's where logic goes out the window and human fallibilities kick in.
Driverless cars, where an artificially intelligent computer assumes command of the vehicle, really could make drastic inroads into the road toll: human error is a huge factor in car crashes, and an AI won't drink, won't speed, won't be texting, will be buckled in, won't be turning around to chastise the children.
But there are costs.
Roads will need to be redesigned to accommodate how AI cars operate, and there is the not inconsiderable matter of having to upgrade the entire vehicle fleet.
And there are questions.
Do people really want to surrender their autonomous decision-making power to a computer? Indeed, should we?
Prof Gavaghan leads the Artificial Intelligence Project, a three-year, $400,000 Law Foundation-funded investigation of issues arising from society's rapid adoption of AI technology.
The project is looking at innovations such as the driverless car from three angles: Associate Prof James Maclaurin, from the philosophy department, considers the ethics and philosophy of science; Associate Prof Alistair Knott, from computer science, brings technical expertise; and Prof Gavaghan considers the legal implications of going full-steam ahead down the AI road.
In negligence law, where someone is sued for an act or omission that causes harm, AI raises the question of who can be sued when a machine, rather than a person, has made the decision.
In criminal law many convictions are based on the ``chain of causation'' - a causal link between an act of the defendant and a subsequent crime - while many defences rely on a ``novus actus interveniens'', a new act breaking the chain of causation.
Again, AI poses new issues for the courts. Can a machine be a defendant? Can the act of an intelligent machine break a chain of causation?
In both fields of law, if you can't sue or prosecute the machine, can you take action against its owner? Its programmer? Its manufacturer? The company providing software upgrades?
Artificial intelligence devices are designed to ``learn'' behaviour and their capabilities and actions will change as they learn. At what point might a machine have learned and changed so much it would not be fair to hold the manufacturer responsible any more?
``Issues tend to arise in the blink of an eye and the law is not very quick to blame people for decisions made in that context. It doesn't re-evaluate those decisions in the cold light of day and punish them if, in hindsight, we think they could have done something better,'' Prof Gavaghan said.
``Driverless cars put us into a whole different paradigm because you can make those decisions in the cold light of day and you can programme the car accordingly.''
For example, if faced with a situation in which the car could either drive into a crowd of pedestrians or swerve to one side, potentially killing the driver, what should the car do?
Overseas research has shown that most people believe the car should swerve to avoid the pedestrians, but also that most people would not buy a car that would make that decision.
These are called ``trolley problems''. If you take your trolley down track A one awful thing happens, but is that preferable to taking the trolley down track B, where something else awful happens?
Those are often moral decisions. And for a driverless car to solve them, it will need to have a morality programmed into it.
But whose morality?
``We need to think about what mistakes do we want to programme the car against making at all costs,'' Prof Gavaghan said.
``If the AI in your car is programmed to be very cautious, then it might mean you are late for work more often, but no-one gets harmed ... we make those trade-offs between convenience and occasional tragedy in society all the time.
``That's a decision about values, that's a decision about human values. A computer can't take that decision away. That will have to be built into all this.''
While talking about driverless cars is inevitable - they are by far the most visible manifestation of AI - technology is already playing a part in road safety, and in regulation of individual behaviour.
Most regulatory regimes are built on a ``command and control'' model, where laws are set down and the authorities then try to catch law breakers; for driving, think breath testing and speed cameras.
Technology allows a ``prevention'' approach, where devices such as speed limiters or the alcohol interlock (which requires a driver to pass a breath test before the car will start) aim to make it almost impossible to break the law in the first place.
``When we move all the way down the road to driverless or part driverless cars, then the possibility to impose a lot of other laws and decisions on people becomes considerably greater, and that's a discussion we're going to have to have,'' Prof Gavaghan said.
But, surely if it saves lives, it's a small price to pay?
That sounds fine ... except we are all humans, and we will find it difficult, both psychologically and emotionally, to allow our cars to be controlled by a machine, even if it is for our own safety.
``We have a control fallacy: people feel safer when they are driving than when they are flying or on the train, even though you can demonstrate statistically that they are not, but that's just the sense that people have,'' Prof Gavaghan said.
``I think the psychological and social rather than the legal and ethical issues will be a big part of this. For example, driving is a rite of passage for a lot of people, taking control over your life, and for that reason a lot of people will find it difficult to let go.''
Driverless cars are just a small, albeit high-profile, part of the AI group's work.
They fit within the group's work on the use of algorithms in a legal context.
Controversies in this area of late have included predictive policing, where algorithms are used to predict human behaviour and assess the risk of criminal activity and reoffending, and the use of AI programmes in human and customer relations: the Accident Compensation Corporation's use of such technology has come under close scrutiny in recent weeks.
The three-year study will go on to examine the effect of AI on the job market; some suggest as many as half of all current jobs could be lost due to the rise of AI.
``We're not anti-AI. We think these technologies have tremendous potential in a lot of contexts,'' Prof Gavaghan said.
``It's also important that when we're assessing AI or algorithmic decision-makers, we shouldn't be comparing them with some notional perfect human decision-maker; we should be comparing them against reality, which is that humans are biased, flawed, prejudiced and illogical a lot of the time.
``It's not fair to hold the AI decision maker to a standard of perfection when we accept less than perfection right now.''