
Artificial Intelligence is something we need defending against, like climate change or invasion by a foreign power, writes Paul Tankard.

I first came across the term AI in The Lord of the Rings.

In the caverns of Khazad-dûm, the Fellowship of nine are pursued by an army of orcs to the edge of a great chasm, and through the ranks of orcs they see "a great shadow, in the middle of which was a dark form, of man-shape maybe, yet greater; and a power and terror seemed to be in it and to go before it." As it leaps towards them, only Legolas the elf recognises it.

"Ai! ai!" wailed Legolas. "A Balrog! A Balrog is come!"

The whole passage is one of the great moments in literature. But the deep meaning of Legolas' cry has only just dawned on me.

Sadly, the elves have headed west, and if we are to defeat threats to humanity, almost all of which we have invented ourselves, we need to use our own ingenuity.

Artificial Intelligence, so-called, is a contradiction in terms. Intelligence is a quality of minds, and minds are a function of brains. A better name for it would be "machines taking desk jobs".

You might imagine that the jobs of university lecturers, being fundamentally intellectual work, would be immune from being taken over by machinery, but not so. For a start, much of our work actually involves dealing with bureaucracy and technology. And with the planned reduction of "support services" at Otago, I fear we'll increasingly become the bureaucrats and technocrats ourselves.

A few months ago, my department was invited to trial an automated exam writing and marking system. It seems relatively harmless, although it would potentially deny me seeing my students' actual grammar and spelling, which are arguably informative. Although I wouldn't mind being relieved of the burden of their handwriting.

But there are people in many places working on essay marking software, under the generic name Automated Essay Scoring (AES). Essay marking is, of course, the closest that academics in the humanities get to professional drudgery (I won't pretend that there aren't worse jobs).

And I teach essay writing so, by the end of this semester, I will have read, marked and commented on 353 essays. I do groan about it, on occasions. But I absolutely do not want the precious work of my precious students to be read and assessed by a device.

My students are special, but they are not specially special. Specialness applies to all humanity: bank tellers, librarians and checkout staff. I do not use the self-checkout at Countdown. No-one is advantaged by having relationships with machinery.

Machines have always taken jobs. It maddens me that technological change is figured, as much by its victims as its apologists, as inevitable and remorseless. There is almost nothing that scientists and technocrats have worked out how to do that hasn't been done. Though we have until now done a pretty good job postponing nuclear war.

But the only approach that governments seem to have to AI is analogous to the "harm minimisation" approach to crack cocaine.

I have a better suggestion. The proponents of AI are interested in a) technological advancement for its own sake, and b) making money.

The corporate entities to which they sell their products are only interested in the latter. They expect that AI will achieve this by savings on wages, that is, by putting people out of work.

My argument would be that AI is, like climate change or invasion by a foreign power, something we need defending against. And defence of people at a national level is the bottom-line responsibility of government.

So, here's what a government that really wants to help people should do about AI, indeed about all and any technological change that is intended to result in job losses.

The Government ought to ascertain how much money corporations expect to save by introducing AI, and then pay them not to do it.

There would be a cost; but then, all wars cost. The cost would be more than recouped by a) keeping on workers who will also pay taxes, and b) savings on retraining, counselling, unemployment benefits, policing and all the costs of having large numbers of people with nothing productive to do.

The purpose of work is not just to get things done. Work is also so that people are connected in society by bonds of mutual dependence; work is to enable us, even to compel us, to mix with other, different people, to the benefit of our souls; and work is to give us all something better to do day by day than eating, relationship-hopping and binge-watching stuff on Netflix.

To look after people is the fundamental human calling, and one of the means we have developed to deal with things we can't deal with each on our own, such as military invaders, economic invaders, technology powered by capital, and marauding Balrogs, is government.

But hey, what do I know? I don't want to be saved the trouble of being intelligent. I'm a humanist.

Dr Paul Tankard is a senior lecturer in the University of Otago's department of English and linguistics.


Comments

Work is not necessarily monotonous drudgery. It can be inspiring, eye-opening and interactive, and it can improve knowledge, understanding and wisdom.

In 'Meetings with Remarkable Men', Gurdjieff and a companion take a packet boat on a pilgrimage. The other man becomes interested in the engine room and stays on as a motorman, while Gurdjieff continues his quest.