Be wary of unverified AI-generated content in litigation

January 24, 2024

In Felicity Harber v The Commissioners for HMRC [2023] UKFTT 1007 (TC), the Appellant appealed a tax penalty issued by His Majesty’s Revenue and Customs (HMRC) for failure to notify HMRC of a capital gains tax liability following a property disposal. In support of her appeal, the Appellant (who was a litigant in person) relied upon nine decisions, none of which were genuine – they were in fact fictitious cases generated by AI.

The Tribunal dismissed the appeal and held that the Appellant was required to pay the penalty on the basis that she did not have a reasonable excuse for failing to notify HMRC of her capital gains tax liability. In coming to its decision, the Tribunal made observations about the reliance on fabricated cases generated by AI.


The Appellant’s use of AI

Whilst the nine cases in question bore some similarities to genuine Tribunal cases, such as the parties’ surnames and the issues in dispute, none of them could be located on any publicly available database of the Tribunal’s judgments. The Tribunal carried out further analysis to satisfy itself on the point, including identifying inconsistencies within the cited cases, and concluded that they were not genuine judgments but had been generated by an AI system.

The Tribunal noted that the outcome of the appeal was not affected by the Appellant’s attempt to rely on the fictitious cases. Nevertheless, the Tribunal was clear that citing fabricated cases is not harmless and amounted to “a serious and important issue”. Whilst the impact of the AI-generated cases in this instance was less than it would have been in other types of litigation, it nevertheless resulted in the Tribunal and the opposing party wasting time and costs. In turn, it reduced the Tribunal’s ability to allocate resources to hear and determine other matters.

More broadly, reliance upon fictitious cases may harm the reputations of judges and parties whose names are referenced, promote cynicism about the legal profession, and undermine public confidence in the UK’s judicial system, to which the use of precedent is fundamental.

The Appellant was not penalised for relying upon fictitious cases, which may have been because she was a litigant in person and was in any event unsuccessful in her appeal. The Tribunal also accepted that she was unaware that the cases were fabricated and did not know how to check cases using the Tribunal’s website or other legal websites, such as BAILII.


Some observations and takeaways

AI presents opportunities for efficiencies and innovative approaches to aspects of litigation. However, lawyers and litigants must be alert to its limitations and flaws, particularly in light of their duties to the court and their professional duties. As set out in recent guidance issued to English judges (for more details see our previous post here), checking AI-generated content against reliable sources is crucial to avoid relying upon incorrect legal content.