Will computers take our jobs, asks Michael Winikoff, as he explores the field of artificial intelligence.
Artificial intelligence (AI) is in the headlines - everything from revolutionising work to producing killer robots.
But is the field of AI really that advanced in the workplace, and what does it mean for the workforce?
Artificial intelligence researchers have been finding ways for computers to do tasks that are considered to require human intelligence, such as solving puzzles, playing chess, recognising faces, writing reports, analysing data, or driving a truck.
Some of these are tasks that humans are paid to do, and this raises some questions we should all be considering.
What happens when machines can do a significant portion of a person's job? What if a machine can completely replace a human in a particular job? Will AI make the whole concept of a job as we know it obsolete? Are we facing a future of mass unemployment?
Broadly speaking, there are two camps of opinion on the future of AI in the workplace.
The first, which I will call ''heard-it-before'', claims we are not facing radical change. It points to a history of failed predictions that automation would cause mass unemployment. Despite past advances in automation, we are not living lives of leisure.
This camp argues that while disruption can make certain jobs redundant, new jobs are created. For instance, we no longer have an industry devoted to supporting horse-based transport in cities, but we have an industry devoted to supporting car-based transport.
The second camp, which I will call ''but-this-is-different'', claims that this time things are different. This time we are facing the prospect of software soon being able to do anything that humans can do.
If that happens, humans will become unemployable. This camp invites us to consider the development of the car from the perspective of the horse.
So, which camp is right? Actually, I would argue that both are wrong.
I'd also suggest that both points of view highlight the importance of expertise; specifically, the need for deep and diverse expertise to guide future development.
The ''but-this-is-different'' camp is wrong in that it tends to overestimate the capabilities of AI systems.
One reason for this is that non-AI-experts do not appreciate how hard AI is. In particular, some tasks that are easy for humans, such as recognising human faces, navigating a physical environment, or answering questions, can be hard, or even extremely hard, to automate.
And tasks that require understanding of human psychology, emotional intelligence, or common sense reasoning remain unsolved and very challenging. For instance, answering the question ''Why are they smiling in this photo?'' with ''Because they are getting married'' requires an understanding of human customs, motivations, and emotions that is still a long way from being realised by machines.
Despite progress in some areas, human-level artificial intelligence is not close to being achieved. In fact, most researchers in artificial intelligence are not actually working on general human-level intelligence.
There are several arguments typically presented by the heard-it-before camp which are also questionable.
The economic argument suggests automation increases wealth by making production of goods and services more efficient. More wealth means more demand for goods and services, and therefore more demand for the jobs that create these goods and services.
This argument ignores short-term disruption and substantial issues relating to wealth distribution. It also relies on assumptions, the most relevant being that human labour is required to produce goods and services.
But this assumption will break down when we achieve human-level general artificial intelligence. Why hire a human if a computer can do exactly the same job, but much more cheaply and more reliably?
This blind spot highlights the need for diverse expertise. An economist may not be aware of the current and future capabilities of AI, but bringing together diverse expertise would avoid such misconceptions.
Having a discussion that included not just economists, but also sociologists (with a richer understanding of work) and, of course, AI researchers, might therefore help develop a better answer.
Answering questions about AI and society requires not just expertise in AI, but also in a broad range of business, humanities and science disciplines. Relevant expertise includes marketing, management, tourism, economics, ethics, philosophy, law, psychology, and sociology.
At Otago there are groups that bring together this sort of diverse multidisciplinary expertise to consider questions about AI, autonomous systems, and society.
Finally, it's important to consider not just the question ''How will AI affect society?'', but also ''How would we want AI to affect society?''.
AI creates opportunities and possibilities. What we do with them is up to us.
Will AI affect jobs? Yes. But exactly how is not predetermined; we collectively choose the future we want. And that's worth thinking about now.
Michael Winikoff is a professor in the department of information science at the University of Otago. His research examines software engineering, programming languages, and logic and formal methods to find better ways of creating software. This is the fourth and last article arising from the Business School's Future of Work Dunedin project.