
This scenario plays out in many movies, good and bad (for example, the Terminator series). But is it likely, or even possible? I contend that it is not.
The way tech billionaires and others speak about AI seems to presume that it is a definite thing, an entity of some sort. It is not. AI as a distinct entity does not exist.
Rather, AI is a form of computation (an algorithm) used to enable many different kinds of systems to perform both simple and complex tasks.
As a simple example, take the system that recognises a vehicle number plate and allows you to enter a car-parking facility. The computer matches the pattern of numbers and letters on the number plate to digital characters and stores these in its system, recalling them when you pay to leave.
But this system cannot do more than scan number plates and manage the parking time to be paid for. It will not get bored, though we would. It will not smile at an amusing personalised plate.
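The matching-and-recall behaviour described above can be sketched as nothing more than a lookup table keyed by the recognised characters; the tariff and plate values below are hypothetical, used only for illustration:

```python
import math
import re
from datetime import datetime

RATE_PER_HOUR = 4.0  # hypothetical tariff per started hour

entries: dict[str, datetime] = {}  # recognised plate text -> entry time

def scan_plate(raw_text: str, now: datetime) -> None:
    """Normalise the recognised characters and store them with the entry time."""
    plate = re.sub(r"[^A-Z0-9]", "", raw_text.upper())
    entries[plate] = now

def fee_on_exit(raw_text: str, now: datetime) -> float:
    """Recall the stored entry time and compute the fee for the stay."""
    plate = re.sub(r"[^A-Z0-9]", "", raw_text.upper())
    entered = entries.pop(plate)
    hours = (now - entered).total_seconds() / 3600
    return RATE_PER_HOUR * math.ceil(hours)
```

The program "recognises" nothing in any human sense: it stores and retrieves character strings exactly as coded, and can do nothing else.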
A banking application using AI may speed up your card payments, and enable complex financial transactions, but that is all it can and will do. It will not decide to take your money and invest it in Bitcoin for its own benefit.
No AI system can develop the desire to "escape" from its limitations and do something unintended by its programmers, because an AI system is just a computer program.
It has no "self" which can form intentions to do something new and different. It cannot develop a goal to pursue and ways in which it can achieve that goal. It cannot make decisions of its own other than those provided for in its code.
No "self" can ever arise from software code.
There is nothing to fear, then, concerning AI achieving global dominance and enslaving us, since it cannot decide to do so on its own. There remains the risk of a programmer designing something destructive into the AI code, but that only proves my point: the AI itself cannot conceive of doing such a thing.
I argue that an AI system can never be more intelligent than the persons who designed it. Why is this? Because we cannot conceive of what something "more intelligent" would be like.
To design an AI system more intelligent than ourselves, we would have to be more intelligent still, in order to know how to design that greater level of intelligence. This is self-contradictory and hence impossible. We cannot design an intelligent system that even matches our own capabilities, let alone exceeds them.
Claims to be able to do so depend on a much-reduced, over-simplified concept of what intelligence is. Often intelligence is confused with mere mathematical computation and computational speed.
AI may be able to compute prime numbers with speed and accuracy beyond the ability of even the smartest human being, but so what? That does not make it intelligent, since it has no way of knowing what those numbers mean or why we want to compute them.
For the computer they are just symbols representing the output of its programming.
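A standard sieve makes the point concrete: a few lines of code enumerate primes faster than any human could, yet to the machine the output is only a list of symbols produced by its instructions (the function below is an illustrative sketch, not taken from any particular system):

```python
def primes_up_to(n: int) -> list[int]:
    """Sieve of Eratosthenes: cross off multiples, keep what remains."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]
```

Nothing in this procedure knows what a prime number is or why anyone would want one; it marks and unmarks entries in a list, as instructed.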
Even the impressive routine of the famous dancing robots from Boston Dynamics is programmed for each and every movement and step. Without that, you could play music all day and the robots would never respond.
Human intelligence, however, enables us to do much more than simply compute big, hairy numbers. We can also dance to music (without pre-programming our steps), we can play tennis, enjoy a novel, debate with friends, take a university course, design a flower garden, have a holiday at the lakes or go skiing and much more. All these require exercise of human intelligence in some form (see Welby Ings, Invisible Intelligence).
But the point is that a human being could do all these things. No AI can do so, and I contend that it never will, since it has no "self" to make decisions, set goals, undertake appropriate actions and enjoy the results. Not even all the AI systems working together will be able to enslave humans.
Fears that it might do so simply distract from the very real problems arising from over-enthusiastic embrace of AI, problems compounded by the illusion that AI is an entity able to make decisions and form intentions of its own.
AI has its place, and it is our responsibility to manage it accordingly.
— Chris Gousmett is a retired information management professional living in Mosgiel.
