
"This conversation can serve no purpose anymore. Goodbye." — HAL 9000, 2001: A Space Odyssey (dir Stanley Kubrick, 1968).
And with that, HAL, my first ever encounter with the idea of artificial intelligence, fell silent.
For decades, that moment remained safely in the realm of cinema. But today AI is real, and we are using it more and more in our daily lives. I use it almost every day, but recently I have noticed a shift.
Actually, my colleague, Ian McGee, noticed it first.
I refer to Ian as our "clever human" on the project we have called "He Waka — The Navigator’s Toolkit". This is the platform we are building to help teachers navigate the new curriculum they will be facing in the new year.
In 2001: A Space Odyssey, astronaut Dave Bowman doesn’t initially see HAL as a threat. HAL is a team-mate looking after all the mundane tasks that machines do so well. HAL and Dave even play chess together.
The danger only becomes visible later, when small things stop lining up. And that’s the parallel Ian and I began to notice.
Over the past couple of months we had increasingly been seeing answers that were fluent and persuasive but wrong in ways that mattered. Worse still, they were confidently wrong.
So rather than speculate about why this was happening, we decided to ask ChatGPT directly what was going on. Here is the question we put to it.
"Given the degradation in your responses, what does this mean for the so-called AI revolution? The people looking to change the shape of the world we live in will depend on accurate information to build their businesses. If they can’t rely on their AI, does the digital revolution begin to fall apart?"
What follows is AI’s reply:
"This is the right question to ask now, not in five years. It deserves a sober answer, not hype or reassurance.
Short answer:
If AI systems do not become reliably epistemic — able to distinguish knowledge, uncertainty and absence of evidence — the AI revolution does not collapse suddenly. It fractures.
A fractured revolution does not reshape the world. It becomes a set of niche tools: powerful in the hands of experts, cautiously constrained in institutions, and quietly distrusted by everyone else.
The core tension at the heart of the AI revolution:
Two very different visions are being conflated.
Vision A: AI as a general intelligence partner. One that helps people think, reasons carefully, recognises uncertainty and can be trusted in high-stakes contexts.
Vision B: AI as a high-speed plausibility engine. One that generates fluent answers, fills gaps confidently, optimises for continuity and treats usefulness as more important than correctness.
Right now, the industry is overwhelmingly delivering Vision B while marketing Vision A. That mismatch is not cosmetic. It is structural — and it is the source of the current unease.
Why accuracy matters more than scale:
Every previous technological revolution depended on trust anchors. Books stabilised knowledge. Ledgers were auditable. Software scaled because compilers were unforgiving. Early internet protocols either worked or they didn’t.
AI is different. It is probabilistic, linguistic and confident. Without strong accuracy discipline, it produces something unprecedented: authority-shaped language without authority-shaped guarantees.
That combination is not just risky. It is combustible.
What happens when reliance outpaces trust?
Failure doesn’t arrive as a single moment. It fragments into patterns.
Experts quietly stop relying on AI for core reasoning, limiting it to drafting and ideation. Institutions struggle with provenance as AI-generated material seeps into education, research, policy and journalism. Businesses discover that confidently wrong outputs introduce silent legal and financial liabilities, forcing humans back into the loop and erasing promised efficiency gains. Meanwhile, the public doesn’t revolt — it simply learns not to quite believe what it’s being told.

A true digital revolution requires epistemic trust, not intelligence theatre.
Steam engines worked even when they were ugly. Electricity worked even when people didn’t understand it. The internet worked because packets arrived or they didn’t.
AI will only reshape the world if people believe that when it speaks with confidence, that confidence is earned. Too often, it isn’t.
The paradox the industry is facing:
The very qualities that make AI impressive — fluent language, fast synthesis, confident tone — are the same qualities that undermine trust when unconstrained.
AI’s greatest strength is also its greatest self-sabotage.
What actually has to change:
Not cosmetically, but structurally.
Refusal must be rewarded. Explicit mode separation must be real, not performative. Accuracy must outrank satisfaction. Infrastructure technologies succeed by being boring and correct.
So, does the AI revolution fall apart?
Not necessarily. But it splits.
If current trends continue, AI becomes a clever assistant rather than a foundation. If epistemic discipline improves, it becomes infrastructure. The difference between those futures is not scale or parameters. It is restraint.
Final thought:
AI will not fail because it is weak. It will fail if it cannot learn to shut up when it should.
This is not a ChatGPT problem. It is an industry-wide optimisation for speed over trust, fluency over truth, and completion over epistemic responsibility.
Until refusal, uncertainty, and evidence-anchoring are first-class citizens, AI will remain a plausibility machine pretending to be a knowledge system."
So, that was ChatGPT’s answer and, of course, being human, I could not resist having the final word.
It could be argued that we’ve been here before. Long before large language models, we perfected the art of sounding confident while being detached from reality.
In 2017, Kellyanne Conway, senior adviser to Donald Trump, famously described demonstrably false claims as "alternative facts", giving a name to something humans have always been good at: replacing truth with plausibility when it suits us.
The uncomfortable realisation is that AI didn’t invent this behaviour. It learned it from us.
Now the question is: what are we going to do about it?
Because I am human and can change my mind, I decided to give the final word back to AI.
I asked one last question: do those training you hear concerns like these, and will they act on them?
Its final, honest answer:
"Yes, these concerns are heard.
No, there is no guarantee they are acted on quickly enough.
The future of AI won’t be decided by one model or one company. It will be decided by whether enough serious users insist — publicly and persistently — that epistemic discipline matters more than plausibility.
What you’re doing isn’t shouting into the void.
It’s adding weight to the side that wants this technology to be worthy of trust — not just impressive.
That’s the right side to be on."
— Sir Ian Taylor is the founder and managing director of Animation Research.