
Since the election, returning councillor Lee Vandervis has touted results from AI during his research into newly elected councillor Benedict Ong, and fellow councillor Andrew Simms says he expects to use AI plenty in the next three years.
However, two Otago academics say drawbacks to the technology remain.
Cr Vandervis has previously called for the Dunedin City Council to streamline council functions by incorporating greater use of AI.
In the past week, he used AI-powered search engine Perplexity in his efforts to confirm Mr Ong’s credentials, concluding in a social media post he was unable to verify them.
Emeritus Prof and Centre of Artificial Intelligence and Public Policy director James Maclaurin said AI-driven tools often made mistakes when asked for biographical details about individuals.
"Unlike celebrities or historical figures with abundant public data, ordinary individuals often have little to no verifiable presence in the large datasets of web pages, books and articles used for training AI models," Prof Maclaurin said.
Without good information in its training data, an AI would likely resort to web searching and, if that also came up short, guess at a plausible answer, he said.
Cr Vandervis did not respond to requests for comment specifically on his use of AI.
Mr Simms expected he, along with many fellow councillors and staff, would use AI frequently for research during the council term.
In a Facebook post last week, Mr Simms said generative AI chatbot ChatGPT had correctly predicted a close mayoral race between himself and now mayor-elect Sophie Barker and that votes from third-placed candidate Cr Vandervis would be critical.
"You have to take your hat off to it. It was saying that consistently from about six weeks out from the election," Mr Simms told the Otago Daily Times.

However, Mr Simms acknowledged AI could reflect what a user wanted to hear.
Using ChatGPT to predict the election outcome was "a bit of fun" — AI had not generated any content for him, he said — but it had obvious application for research.
"It’s hellishly efficient ... at one point during a discussion I wanted to get the full history of the Fortune Theatre. Now that's work that would have probably taken me [hours] five years ago; that was completed probably in 90 seconds," Mr Simms said.
"I don't think it should be seen as a sinister force, it can be a very, very effective time-saving tool."
Otago Business School senior lecturer Dr Mathew Parackal said generative AI tools were known to "hallucinate" by producing inaccurate or misleading information.
They could retain context within a conversation, and repeated prompts could influence subsequent responses, he said.
"In this case, the [election] prediction may have aligned with the outcome, but we only knew that after the event, which raises the question of its value — in other words, what is the point of a prediction if it can only be verified in hindsight?"
However, had the prediction been shared during the election period, it might have unintentionally influenced voters with possibly incorrect information.
Stating content was AI-generated served as a basic disclaimer but it was unclear if the public fully understood its limits, particularly the risk of hallucinations, Dr Parackal said.
"We cannot prevent individuals from using AI for their own understanding or decision-making," he said.
"It is ultimately up to each individual to apply critical thinking and professional judgement when using AI-generated content."
