
The Otago Polytechnic information technology lecturer and researcher was recognised as an AI Research Pioneer at the inaugural awards, held recently in Auckland.
The award honoured his outstanding original research in artificial intelligence, which advances AI knowledge, tools or methodologies and contributes significantly to the field.
"I was surprised by the award.
"There are many researchers in New Zealand doing excellent work in AI, and I know several who are just as deserving, if not more so, of this recognition.
"I feel both humbled and grateful to have received this recognition."
Associate Prof Rozado said his research looked at how to measure "epistemic integrity" in AI systems.
"That is, what criteria should we use to determine whether an AI system is a strictly value-neutral, truth-seeking data collector and synthesizer as well as hypothesis tester?
"Epistemic integrity refers to principles widely accepted in the philosophy of science — consistent application of standards when evaluating information, skill at perspective taking and gold-standard metrics of truth resolution such as precise recall of factual information and accurate probabilistic estimation of uncertain future events.
"Maximising epistemic integrity in an AI system should be our North Star goal if we want truth-seeking AI systems."
His experimental work had uncovered evidence of occasional lack of principled reasoning by AI systems — that is, evidence of epistemic distortion, rather than epistemic integrity, he said.
"Arbitrary factors such as the order of elements in the AI prompt and other non-relevant heuristics, seem to have a substantial effect on the system’s outputs.
"This suggests that a non-negligible fraction of what we perceive as ‘intelligence’ in modern AI systems might be sophisticated statistical tricks, sort of like a magician creating an illusion of something that is not really true."
A good illustration of this phenomenon was an AI medical diagnosis system, which was trained to detect pneumonia on chest X-rays collected from multiple hospital systems, he said.
"The AI quietly learned to identify the individual hospital where images originated, from pixels in image corners, where scanner-specific markers tend to be located.
"Once the model inferred the hospital, it could then lean on hospital-specific disease prevalence — that is, different patient populations across hospitals — as a shortcut for diagnosis rather than true pathology detection."
The most recent models, like the recently released GPT-5, seemed to score higher on epistemic integrity benchmarks than the original ChatGPT, based on GPT-3.5 and released in November 2022, Assoc Prof Rozado said.
"So, in some respects, these systems are improving fast."
He was proud of the award and the industry recognition.
"This award highlights that meaningful research is happening at polytechnics.
"At a time when our sector is under significant pressure, it is important to remember that education and research should not just be measured in financial terms."