The use of AI in legal judgments

October 20, 2025

In VP Evans (as executrix of HB Evans, deceased) & Ors v The Commissioners for HMRC [2025] UKFTT 1112 (TC), a Tax Tribunal Judge, Christopher McNall, used AI in producing his decision regarding an application for disclosure by the Appellants. In the decision, the Judge confirmed “I have used AI in the production of this decision” and explained why and how he had decided to employ AI. This case provides an interesting example of the sort of approach judges may take when using AI tools to assist them in making decisions.


The Judge’s use of AI in this case

In this case, the decision made by the Judge was in response to an application by the Appellants for further disclosure of documents from HMRC. The application was decided on written submissions only, did not involve any oral hearings or evidence, and the Judge was not "…called upon to make any decision as to the honesty or credibility of any party." The Judge commented that this made the application "well-suited" to the use of AI.

Additionally, the Judge made clear the limited way in which he had used AI, specifically only to summarise documents. He also clarified that he had “satisfied [himself] that the summaries – used as first-draft – are accurate” and that he had “not used the AI for legal research.” He carefully noted how he had complied with his judicial responsibilities to be impartial and independent, stating that “The judgment applied – in the sense of the evaluative faculty, weighing-up the arguments, and framing the terms of the order – has been entirely mine.”

The Judge acknowledged that “judges are not generally obliged to describe the research or preparatory work which may have been done in order to produce a judgment” but stated that he felt it was “appropriate, in this case” for him to disclose his process.

Under the relevant judicial guidance (published by the Courts and Tribunals Judiciary on 15 April 2025), there is no express requirement for judges to disclose the use of AI. However, the Judge’s transparency reflects a cautious and conscientious approach and may be emulated in future decisions or judgments prepared with the assistance of AI, especially as it would no doubt assist in alleviating any concerns that parties may have about the use of AI in such circumstances. For more details on the judicial guidance, please see our previous post here.


Benefits and disadvantages of using AI in legal work

There has been increasing use of generative AI tools in the legal industry in the past few years. The use of AI undoubtedly offers numerous benefits, including increased efficiency and time savings, which must be balanced against the need for lawyers and judges to check and verify any material produced by AI tools. If the use of AI enables judges and arbitral tribunals to produce decisions with faster turnaround times, this may be to the benefit of all parties.

On the other hand, there are valid concerns surrounding the use of generative AI tools. Most notable is the concern regarding the accuracy of the material produced. The English judiciary is required to follow the April 2025 guidance, which requires that "the accuracy of any information […] provided by an AI tool must be checked before it is used or relied upon." With AI tools being relatively new to the legal sphere, there is currently considerable emphasis on all parties involved in legal proceedings carefully checking material produced with AI assistance. As everyone becomes more comfortable with these tools and their output, it is important that this meticulous approach is maintained.


Looking forward

The use of AI in any particular case will depend on whether it is appropriate in the circumstances and on the judge's approach to the new technology. In this case, the use of AI was limited to specific ancillary tasks (i.e. summarising documents) in relation to a procedural application. For obvious reasons, this cannot be compared to a decision or judgment concerning substantive legal or factual issues, or one which involves evaluating oral submissions and evidence at a hearing. Moreover, the Judge confirmed that he was satisfied the AI document summaries were accurate (i.e. he had read the documents independently of any AI summary). In other words, there was no suggestion that AI summaries are a default choice or a replacement for human review of documents. Certainly, the value of parties preparing full legal submissions would be diminished if judges and decision-makers did not read those submissions in full and instead relied solely on summaries produced by AI tools.

This case certainly shows a positive and incremental step forward in the use of AI in the legal industry. Importantly, it highlights how AI tools can be used to support judges and lawyers in producing written material, balanced with the need for them to exercise their independent judgement and provide transparency as to the methods used.