Use of AI in the Tribunal – Brendan McGurk KC

03 Oct 2025

VP Evans (as executrix of HB Evans, deceased) & Ors v The Commissioners for HMRC [2025] UKFTT 1112 (TC) is, in all other respects, an unremarkable case management decision of the First-tier Tax Tribunal. The underlying appeals challenged Closure Notices issued by HMRC concerning Capital Gains Tax liabilities arising from tax planning arrangements involving offshore trusts and the application of the double taxation conventions between the UK and New Zealand, and the UK and Mauritius. The Appellants applied for the disclosure of documents by HMRC. The application was dealt with on the papers and granted in part.

More remarkable was the Tribunal’s postscript, entitled ‘The Use of AI’, in which Tribunal Judge McNall indicated that he had used AI in the preparation of his ruling and explained for what purposes the AI was deployed. He said ([48]): “I have used AI to summarise the documents, but I have satisfied myself that the summaries – treated only as a first-draft – are accurate. I have not used the AI for legal research.” The Judge used Microsoft’s ‘Copilot Chat’, available to judicial office holders through the eJudiciary platform. All data entered into Copilot Chat on that platform remains secure and private.

The FTT’s decision follows publication, in April 2025, of “AI: Guidance for Judicial Office Holders”. That guidance replaces guidance issued in December 2023. The refreshed guidance expands the glossary of common terms and provides additional detail on misinformation, bias, the quality of datasets, and other areas of concern. Relevantly in the context of Evans v HMRC, it also advises judges to inform litigants that they are responsible for the AI-generated information they present to the court or tribunal, just as for any other type of evidence. The covering note to the refreshed guidance (signed by the Lady Chief Justice, the Master of the Rolls, the Senior President of Tribunals and the Deputy Head of Civil Justice) states: “The growing accessibility and relevance of AI in the court and tribunal system means it is important that its use by or on behalf of the judiciary is consistent with its overarching obligation to protect the integrity of the administration of justice.”

In Evans, Judge McNall also noted the ‘Practice Direction on Reasons for Decisions’, released on 4 June 2024, in which the Senior President of Tribunals wrote: “Modern ways of working, facilitated by digital processes, will generally enable greater efficiencies in the work of the tribunals, including the logistics of decision-making. Full use should be made of any tools and techniques that are available to assist in the swift production of decisions.” The Practice Direction clearly covers AI, and was endorsed by the Upper Tribunal in Medpro Healthcare v HMRC [2025] UKUT 255 (TCC).

The Judge observed that the disclosure application before him was well-suited to the use of AI since it was a discrete case-management matter, dealt with on the papers, and without a hearing. “The parties’ respective positions on the issue which I must decide are contained entirely in their written submissions and the other materials placed before me. I have not heard any evidence; nor am I called upon to make any decision as to the honesty or credibility of any party” ([43]). He concluded by noting that he was mindful that “the critical underlying principle is that it must be clear from a fair reading of the decision that the judge has brought their own independent judgment to bear in determining the issues before them”: see Medpro at [43]. To that end he observed that he was the decision-maker, and was responsible for the material created by the AI. “The judgment applied – in the sense of the evaluative faculty, weighing-up the arguments, and framing the terms of the order – has been entirely mine” ([49]).

It is right and proper for Courts and Tribunals to indicate where they have used AI and to what end. Questions will no doubt arise as to when a Court or Tribunal must give notice of how AI has been used, and of what it has produced. One can well imagine arguments from litigants that they are entitled to see how, for example, an AI has summarised various matters and how the Judge has satisfied himself or herself that the summary accurately reflects the materials. Analogies with the GDPR-conferred right to rectification might be drawn, albeit that any such analogous right would not be conditional on the processing of personal data. We are in the foothills of AI use by the judiciary. Empirical research is being undertaken into how litigants respond to various uses of AI by Tribunals. Transparency in the deployment of the least controversial applications of AI by Courts and Tribunals will begin to normalise AI use in these settings, meaning that we can probably expect more expansive uses of these tools in Courts and Tribunals in the not-too-distant future.
