Measuring the wrong thing

Most legal professionals now use AI tools for work, more than double the figure from a year ago. Firm-wide AI deployment almost doubled in the same period. And law firm technology budgets grew at the fastest rate the industry has seen.

By any traditional adoption metric, legal AI is a success story. Except for one problem: few organisations actually track the ROI of their AI tools.

A user who opens Copilot regularly as a calculator, to build a recipe for the evening's dinner or to update their CV counts as an active user. A lawyer who uses it once a week on the right task and saves three hours of non-billable time barely registers. Usage metrics track activity, but they are largely silent on any tangible outcome.

The industry has spent two years celebrating adoption rates while ignoring the question that matters: is any of this translating into results that clients actually care about?

Five metrics to pay attention to

None of these show up in a standard usage report. All of them matter more than login rates.

01

User experience

Do lawyers trust the output enough to use it? Or are they re-doing the work from scratch and using the AI output as a cross-check? If it is the latter, you have not saved time. You have added a step.

02

Efficiency

Has the time to complete a specific task actually gone down? Or has the burden been transferred to a senior lawyer verifying non-explainable outputs? If you cannot point to a number, and track it per matter, you are relying on biased evidence.

03

Pricing

Can you price work differently because of what the tools make possible? The answer may be yes or no, but the ability to explain why, transparently, is what many law firm clients are looking for.

04

Margin

Law firms mostly sell time at a markup, and gross margin is the key metric for tracking financial health. AI tools are not cheap. If the efficiency gain does not exceed the licence cost plus the time people spend managing and learning the tools, the business case is purely strategic market positioning: a short-term view at best.
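That break-even condition is simple enough to sketch in a few lines. The function and every figure below are hypothetical, purely to show the shape of the calculation: value of time saved versus licence cost plus the overhead of managing and learning the tool.

```python
# Illustrative break-even check for an AI licence. All names and
# figures are hypothetical; substitute your own firm's numbers.

def monthly_net_gain(hours_saved, blended_rate, licence_cost, admin_hours, admin_rate):
    """Monthly net gain: value of lawyer time recovered, minus the
    licence fee and the cost of time spent managing/learning the tool."""
    value_saved = hours_saved * blended_rate
    overhead = licence_cost + admin_hours * admin_rate
    return value_saved - overhead

# Hypothetical: 12 hours saved per month at a £300 blended rate,
# £1,500/month in licences, 6 hours of admin time at £150/hour.
gain = monthly_net_gain(12, 300, 1500, 6, 150)
print(gain)  # 1200: positive, so the case is more than positioning
```

Run the same calculation with four hours saved instead of twelve and the gain turns negative, which is exactly the situation the paragraph above warns about.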

05

Quality

If AI is not yet improving the quality of output, and with it the client experience, it should be. The most forward-thinking law firms are reinvesting the time saved by AI back into improving the client experience with additional value adds.

The verification tax

There is an anecdotal trend of senior time increasing as a result of AI-generated first drafts. Junior lawyers use it to draft documents, summarise contracts, prepare first-pass research. Then seniors spend as long checking the output as they would have spent doing the work themselves.

One reason for this is that the underlying context is not good enough. Searches surface misfiled documents, empty metadata fields, and unmanaged precedents, or mash together examples from different jurisdictions.

The output looks plausible, and gets copied and pasted into the first draft. Experienced lawyers see this cognitive surrender immediately, but the clock is ticking so they rewrite the draft themselves to get it out the door on time.

The fix is not a better prompt template or a newer model. It is better data. Curated precedents, clean metadata, properly filed documents, maintained knowledge content, and crucially, explainability of why the source material was chosen. This is boring, unglamorous work that most firms have been putting off because the manual, human-led system didn't need it.

Deployment is not adoption

Legacy deployments make the tool available, deliver training, and move on. A few keen people end up doing good things with it, and everyone else treats it as optional.

Adoption means a litigation team has identified the three specific tasks where AI actually helps their workflow - not generic use cases from a vendor slide deck, but the things their lawyers do every week on real matters. Those tasks have been tested, refined, and built into how the team works.

This re-wiring of legal process does not happen overnight, and it has to happen practice group by practice group. A litigation team and a banking team do not use AI for the same things. A single firm-wide training session that shows everyone the same demo is useful, but true adoption is a continual-improvement effort that firms need to resource properly.

Three pillars

Business

Strategy connected to financial outcomes

"We want to be innovative" is not a strategy. "We want to increase gross margin, win more work and increase the number of matters per fee earner" is. Without that kind of specificity you cannot measure success and you cannot justify the spend to anyone who controls a budget.

People

Judgement, not just skills

Can your lawyers tell the difference between a task where AI adds value, one where it creates risk, and one where it is the wrong tool entirely? That judgement - knowing what to delegate and what to do yourself - is the skill that matters. Most training programmes gloss over this.

Infrastructure

Data that is fit for purpose

Documents filed and tagged properly in the DMS. A reliable bank of precedents that are curated and current. Know-how that is accessible and maintained. If the system is a dumping ground with 15 years of unfiled documents all labelled "DOC," the AI will reflect exactly that.

The unanswered commercial question

Firm leadership tends to frame AI as a technology decision and fixates on tool selection, which is really the easy part.

Clients are asking what AI means for pricing. Some are direct about it: if your lawyers are using these tools, why does the bill look the same? Others are subtler, requesting breakdowns of time spent against value delivered. Others are simply striking out invoice line items where they deem AI should have been used instead. Ironically, they're using AI to do it.

Firms that cannot show where technology has changed the economics of a piece of work risk clients taking matters into their own hands (pun intended), insourcing work or prioritising panel firms that can.

Real champions

Every firm has a handful of people who figured it out on their own. The naturally curious are getting results, but they exist in silos, and they are by no means the default persona.

Readiness means knowing the full picture of adoption. Some lawyers won't use the tools at all and have no interest in learning, but will play nicely in training. Another group genuinely adds value with them, and is very willing to share success stories. You need to know the mix of personas, and align incentives, before you can plan anything sensible.

It also means a champions programme that goes beyond enthusiasm. Champions have the right incentives, and are embedded in practice groups, working on real matters, testing workflows with their teams.

Where to start

  1. Pick a business outcome. Not "deploy AI" but a measurable commercial goal tied to client work.
  2. Assess gaps honestly.
  3. Start with one practice group. The one with the strongest business case and the most manageable data. Do not try to move the whole firm.
  4. Build the measurement framework before you start. If you cannot define what success looks like in terms of the five metrics above, you will never prove it worked.
  5. Fix the data for that group. Curate precedents, clean metadata, file documents properly.
  6. Test on real matters. Not demos. Not sandboxes. Real client work with real deadlines. Measure the results.
  7. Expand from evidence. When you have proof one group benefited, use that to bring the next group on board.
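Step 4's measurement framework can be as lightweight as a table agreed before rollout: for each metric, a baseline, a target, and an automatic pass/fail check. A minimal sketch, where the metric names, baselines, and targets are all hypothetical placeholders:

```python
# Minimal measurement framework sketch. Metric names, baselines and
# targets are hypothetical; define your own before rollout begins.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    baseline: float
    target: float
    unit: str

    def met(self, measured: float) -> bool:
        # Success is passing the target in the direction implied
        # by baseline -> target (some metrics should fall, some rise).
        if self.target >= self.baseline:
            return measured >= self.target
        return measured <= self.target

framework = [
    Metric("hours per first draft", baseline=6.0, target=4.0, unit="hours"),
    Metric("gross margin", baseline=0.35, target=0.38, unit="ratio"),
    Metric("rework rate", baseline=0.25, target=0.15, unit="ratio"),
]

# Measured after the pilot, success is a comparison, not an anecdote.
print(framework[0].met(3.5))  # True: drafts now take 3.5 hours, under the 4-hour target
```

The point of writing it down this way is that "did it work?" becomes a mechanical check against numbers fixed in advance, rather than a debate after the fact.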

Real readiness

Readiness in 2026 asks a new question. Can the firm actually benefit from what it deploys? A firm with clean data, a clear strategy, and honest measurement is better positioned than one that has rolled out tools everywhere and cannot point to a single efficiency gain.

The firms in the strongest position two years from now will not be the ones that moved fastest. They will be the ones that got the foundations right first.