Professor’s talk on AI ‘excellent’


Emeritus Professor Anthony Robins. PHOTO: SUPPLIED
The Hampden community recently got the chance to pick the brains of University of Otago emeritus professor Anthony Robins on all things artificial intelligence (AI).

Powerful AI is quietly and rapidly infiltrating many aspects of people’s lives online, such as through AI-generated actors attempting to sell products and services.

Hampden Community Energy Society (HCES) chairman Dugald MacTavish said it hosted the event at the Hampden Hall in August, with guest speaker Prof Robins, to help the community better understand the implications of AI and its pros and cons.

Prof Robins gave an "excellent and well-structured talk" to about 27 community members on what forms of AI existed and how they worked, how to take advantage of the benefits and how to identify AI-generated content, Mr MacTavish said.

"Prof Robins discussed AI’s current impact and what lies ahead, how to navigate risks and embrace benefits and opportunities for learning and local business."

Prof Robins was a lecturer in AI in the department of computer science at the university from 1989 to 2025.

His main areas of research are cognitive science, neural network models of cognition and computer science education.

Prof Robins said some of the advantages and risks of AI arose directly from the way that AI systems worked.

He noted the advantages: AI could quickly produce text, images and videos; it could learn to do repetitive tasks; and it could find patterns in large amounts of information that humans had not yet noticed, which was in some sense creative.

As for the risks: the outputs of AI were factually unreliable, often called "hallucinations"; the outputs were often biased because the information used to train AI systems contained bias, such as sexism and racism; and AI systems themselves consumed a lot of energy and water, with harmful environmental consequences.

"The main advantage is that in many cases, AI allows people to be more productive within their area of expertise, but unfortunately, the way AI is currently being used involves many risks beyond the inherent environmental impact.

"AI systems are being trained on copyrighted material and other creative works without acknowledgement or compensation.

"It is being used to replace some workers, in many cases unsuccessfully. It is being used to create fake people on social media, often for propaganda purposes, and fake news or videos.

"It is having a damaging impact on education.

"The list goes on and on," Prof Robins said.

HCES member Alison MacTavish said she had learnt AI was based on all the inputs humans had entered into computer systems and would simply regurgitate a mix of those.

"In other words, it mimics human intelligence but is not intelligent itself," she said.

Prof Robins expanded further on examples of code and image generation, chatbots and automation by looking at three areas: bias, bots (up to 50% of "people" on social media are AI-generated bots) and "hallucinations" (ridiculous answers given by ChatGPT, for example).

"At the moment, AI is controlled by billion-dollar companies and used by many people and organisations as a way of making money or influencing people without regard to the social or environmental costs."

The evening went well, with many questions and several discussion points, Prof Robins said.

One was around whether AI systems would ever have feelings or emotions.

"For the future, who knows, but for systems based on current technology — no, not at all," he said.

Another was around potential ways to make AI less environmentally damaging, which "would require some technological breakthrough that made computing chips vastly more energy efficient", Prof Robins said.

One idea that had been explored at the University of Otago was a future where all computation was solar powered, continually migrating from server to server so as to always be taking place on the side of the planet that was in daytime.

And lastly, there was a discussion around ways of making AI safer.

In his opinion, AI needed to be regulated by governments in the interests of people, not controlled by a few large companies in the interests of profit.

"This is very difficult to achieve because technology always moves quicker than the law, and because of the enormous political influence that technology companies wield," Prof Robins said.