
It comes after dean of learning and teaching Prof Tim Cooper said Otago was looking to introduce a three-tier system for students from next year.
AI use would be broadly broken down into three categories for assessments: restricted, guided and encouraged.
But communications lecturer Dr Rosemary Overell said she was wary about opening the door to a technology that was flawed.
"Each division, some in the humanities, has its own kind of response. So we have things like we make clear to students in our department at the beginning of the semester what's acceptable and what's not. For instance, in my class, it's not acceptable.
"I say, ‘I don't want to read from a bot. I want to read your voice and your words.’ I inform them that by using the bot, not only are they run by unethical corporations that are using huge amounts of power and are a huge environmental threat."
Dr Overell said she was wary about universities treating the use of AI as an inevitability.
"As somebody who's mid-career, it does worry me that potentially down the line, I'll be lined up alongside academics who have never read a book, whose PhD was partly or mostly written by a bot, and a bot that is entangled with a highly problematic, unethical business behind it."
Dr Overell said it was likely she would be moving back towards hosting more "pen and paper" assessments and exams as a means of limiting the possible use of AI.
"So they have to use their own words. But so we're clear about assessment at the beginning of the semester. Some of my other colleagues have done things like have them sign a cover sheet, which says, you know, ‘if you use AI, you must declare it and add a link to the chain of prompts you've given it’."
Dr Overell was one of 50 academics who last month signed a statement on AI on behalf of the Aotearoa Communication and Media Scholars Network, urging universities to halt the tide of AI.
"AI tools such as ChatGPT are contributing to mental-health crises and delusions in various ways; promoting the use of generative-AI in academic contexts is thus unethical, particularly when considering students and the role of universities in pastoral care.
"AI thus undermines the fundamental relationships between teacher and student, academics and administration, and the university and the community by fostering an environment of distrust."
The university uses anti-plagiarism software such as Turnitin, which matches students' essays against an archive of material to pick up where students may have copied work in a way that breaches academic integrity.
Turnitin has an AI-detection feature, but Prof Cooper said the university was yet to turn it on.
"The reason for that is that we do not want even one student to find themselves in a position of being wrongly suspected of improper use of AI because the tool is just not reliable," he said.
Teaching staff were alert to the possibility that students were using AI, he said.
"The reality is that some students might be so proficient at the use of AI that you actually can't tell."
University of Otago deputy vice-chancellor (academic) Stuart Brock said there was "a wide spectrum of views" on the pedagogical value of using AI in higher education.
"As part of our investigations and review at the University of Otago we have consulted and surveyed staff.
"The views expressed by the Communications and Media Scholars Network is one view and is not the consensus view across academic staff here at Otago, nationally or internationally."
The university's review was continuing and no final decision about the use of AI on campus had been made, he said.