AI is more efficient but can harm lawyer-client trust

When lawyers use AI, clients and young lawyers carry the risk, Bruce Curtis writes.

Artificial intelligence is now being quietly embedded across the legal profession.

Law firms are using it to research cases, draft documents, summarise evidence and generate advice. In many firms, AI is almost certainly already being used in ordinary legal work, even where its role is not openly acknowledged.

This shift is often presented as a benign efficiency gain: faster work, lower costs, modernisation.

But in law, efficiency is never neutral. It redistributes power, risk and responsibility. And as AI becomes normalised in legal practice, the benefits are flowing upward to firms, while the costs and dangers are being pushed outward on to younger lawyers and clients.

The issue is not the use of AI itself, but who carries the risk when AI-assisted work is wrong, incomplete or inadequately supervised.

This matters because legal services are not just another professional offering. They are the gateway to rights, remedies and protections that most people cannot access on their own.

When something goes wrong, the consequences are not minor. They are lost claims, missed deadlines, adverse judgements and financial or personal harm that cannot be undone.

Against that backdrop, the use of AI in legal work should be treated as a public issue, not a private innovation choice.

For decades, the legal profession trained new lawyers through work that was repetitive, detailed and essential. Junior lawyers learned their craft by researching case law, drafting submissions, reviewing documents and checking arguments.

This work was not glamorous, but it was how judgement was built.

AI now performs much of that work faster and more cheaply. Tasks that once took hours can be completed in minutes.

Drafts appear instantly. Summaries are generated on demand. From the firm’s perspective, this looks like progress.

From the perspective of training, it is a structural break. Fewer junior roles are needed. Less hands-on supervision occurs. Experience is assumed rather than developed.

Young lawyers are still expected to exercise judgement, manage risk and take responsibility, but the work that once taught them how to do so is disappearing.

The likely result is not a more capable profession, but a thinner one. A smaller number of senior lawyers supported by tools rather than people, and a growing pool of under-trained juniors pushed out or left behind.

This is not just a labour market problem: it is a competence problem that will surface years later, when today’s gaps in training translate into tomorrow’s failures in judgement.

When law firms adopt AI, work is done faster. Fewer staff are required. Margins improve.

But when AI produces an error, the risk does not stay with the tool or the firm. It lands squarely on the client.

AI systems can fabricate authorities, mis-state the law, omit relevant context or generate plausible but incorrect reasoning.

Lawyers are aware of these limitations. Clients generally are not.

Clients are rarely told how much of their work involved AI, what checks were applied or where responsibility lies if something goes wrong.

Even if bills are marginally lower, liability exposure is not. A missed deadline or flawed argument can still derail a case. The firm’s internal savings do not translate into shared risk.

The lawyer-client relationship is already unequal.

Lawyers understand the law; clients do not.

AI deepens that imbalance. A client cannot tell whether their advice was produced through careful human reasoning or automated pattern matching. They cannot meaningfully challenge the method, only the outcome.

And without clear disclosure requirements, they may never know how their case was handled at all.

This matters because trust without transparency is not trust. It is dependency. When clients cannot assess how advice was generated, they lose even the limited leverage they once had.

They cannot ask informed questions. They cannot judge value. They cannot compare services meaningfully.

The profession often frames this as unavoidable complexity. In reality, it is a choice. Lawyers decide how AI is used, how it is supervised and what is disclosed.

Clients bear the consequences of those decisions without being part of them.

Law is not simply information retrieval. It is judgement exercised under conditions of responsibility. Context matters. Ethics matter. Consequences matter.

AI does not understand those things. It produces outputs based on patterns in data. When those outputs are treated as authoritative, legal judgement begins to look like a commodity rather than a responsibility.

This creates a dangerous blur. AI-generated advice often sounds confident and polished. Errors are harder to detect because they are embedded in fluent language.

When something goes wrong, responsibility becomes harder to trace. Did the error lie with the tool, the junior lawyer or the partner's supervision?

Legal regulation moves slowly. It relies on consultation, precedent and incremental change. It is also largely shaped and enforced by the profession itself. AI adoption is moving much faster.

AI tools are already embedded in everyday legal practice. Regulation has not caught up. There are few clear rules on disclosure, quality standards or liability when AI is used.

In the gap between adoption and regulation, clients are exposed.

This is not a hypothetical concern. Once practices become normalised, they are difficult to unwind.

By the time clear rules emerge, harms will already have occurred. And as with past reforms, the profession will argue that change is disruptive, expensive or threatens independence.

AI could expand access to justice. It could lower costs, improve consistency and widen availability. But on the current trajectory, the upside will be captured by firms while the downside is borne by young lawyers and clients.

This is not a reason to reject AI. It is a reason to insist on rules, disclosure and accountability before harm becomes routine.

  • Bruce Curtis is an honorary professor and independent researcher, University of Waikato.