Machines of the future

Elon Musk says Neuralink implants will initially cure various human conditions, before turning humans into cyborgs who can compete with sentient machines. PHOTO: REUTERS
Tech experts this week called for a halt on AI development. Is it the singularity they fear? Should we be worried? Dan Eady takes a look.

Since the 1960s, the concept of the "singularity" has referred to a point in the future when machine intelligence surpasses the ability of the human brain.

It seemed a distant prospect then. Not so now.

In fact, many scientists and tech leaders believe it will happen within the next few decades. In the past week, researchers at Microsoft were reported to have found "sparks" of artificial general intelligence (AGI) in a chatbot. And just days later more than 1000 people, including tech sector leaders, called for a pause on AI development until we can be sure it is safe.

So, what is the state of play and what are the concerns? Should we fear the singularity, or embrace it? What are the implications of this technology for our society and our way of life?

James Maclaurin, a University of Otago professor of philosophy and co-director of Otago’s Centre for Artificial Intelligence and Public Policy, explains that the singularity is a "runaway acceleration in artificial intelligence" that will be caused by machines learning to optimise themselves once we’ve reached artificial general intelligence.

AGI is the term for machines that are more capable than humans in all of our most important cognitive abilities. While many scientists believe that AGI will be achieved within the next few decades, Maclaurin cautions that humans (including scientists) have a terrible record of predicting scientific progress. Nevertheless, he argues that large-scale research into AI safety is important.

Maclaurin points to recent advances in generative AI, such as tools like ChatGPT, as evidence that AI has tremendous potential for both benefit and harm long before we reach the singularity. While ChatGPT is impressive in its ability to collate information, explain complex ideas, compose text and hold very human-like conversations in real-time, it has significant limitations.

For example, it cannot update its beliefs based on perception, it cannot plan or reminisce, and it famously doesn’t really care about the truth; it cares about conversing.

All the same, generative AI is likely to change the way many people live and work, with potential benefits and risks.

University of Otago interdisciplinary researcher Dr Olivier Jutel takes a more critical view of the singularity.

Jutel says that the spiritual beliefs and AI prophecy of the tech industry — which once promised utopian futures built on tech solutions to all our problems — have become tied up with the imperatives of tech-capitalism. He is sceptical of the claims of AI optimists such as Ray Kurzweil and Sam Altman, as well as dystopians like Elon Musk.

Prof James Maclaurin. PHOTO: SUPPLIED
Musk is on record suggesting our best option will be to team up with AI, once the singularity arrives, as some sort of augmented organism. He hopes to achieve that with his Neuralink implants.

"When Musk smokes weed on Joe Rogan and talks about protecting humanity from runaway AI, he is really saying ‘put me in charge’ and ‘would you like to participate in my Neuralink trials?’," Jutel says, referencing a notorious podcast appearance.

Jutel sees these AI investors and boosters as more interested in their own power and status than the welfare of humanity.

University of Otago professor of computer science Brendan McCane believes that the singularity is something we should take seriously. He points out that even if we don’t reach AGI, AI is already having a significant impact on our society and our economy.

McCane argues that we need to be proactive in managing the risks of AI, including job displacement, ethical issues and the potential for misuse. Interdisciplinary research is going to be important in this area, as well as the need for ethical guidelines and regulations, he says.

Indeed, further afield, prominent AI researcher Dr Timnit Gebru, fired from Google’s AI Ethics team in 2020, also argues that AI should be developed and deployed with a focus on its ethical implications. She, too, criticises the capitalist motivations of Silicon Valley, which she argues is primarily interested in turning AI into a profit-maker. Gebru posits that we need to develop AI with a human-centred approach that considers the potential harm it could cause to society.

If there’s a sense of genie-out-of-the-bottle inevitability about some of this, then University of Otago professor Tony Savarimuthu offers at least some hope. He is an expert on multi-agent systems and software engineering.

Savarimuthu put forward a suggestion on the singularity, but only after he had it critiqued by ChatGPT-3.

"Its main challenge will be the ability to control and co-ordinate distributed, physical resources. While this may be easier in some domains which are heavily reliant on computer-based systems (e.g. energy distribution), it will be challenging in many others which are harder to control and predict (such as food production), that are decentralised and are subject to different uncontrollable elements (e.g., weather, diseases).

"If at all we reach such a point, we may hold the key to turn off such systems in case they were to go haywire ..."

So, as long as the AI remains contained in the box on your office desk, you can always turn it off.

An issue our thought leaders agree on is the importance of transparency in AI. We have to recognise the limitations of AI systems, including their biases and errors, and be transparent about how they are trained and deployed. This transparency is critical to ensuring AI systems are developed and used ethically. In the United States, there are cases where facial recognition technology has falsely identified innocent people as criminals, leading to the wrongful arrests of Black men. University of Utah and Northeastern University researchers discovered that, because it was trained on a flawed data set, AI-powered hiring software developed by Amazon was biased against women.

This is where the need for collaboration and interdisciplinary approaches to AI development comes in. Bringing together experts from different fields might ensure that AI systems are built with a deep understanding of the social and ethical implications of their use.

Similarly, there are growing calls for responsible AI governance that prioritises ethical and social considerations — underlined in this week’s call for a halt on development.

Leading the charge in this field of governance is the Distributed Artificial Intelligence Research Institute (Dair). Not coincidentally, Gebru is the founder and executive director.

"Instead of constantly working to mitigate the harms of AI research performed by dominant groups without an analysis of potential risks and harms, we encourage a research process that analyses its end goal and potential risks and harms from the start," the institute says.

Dair argues that only AI systems designed with human-centred values in mind can enhance and augment human capabilities, rather than replace, undermine or discriminate against people.