Navigating the pitfalls of generative AI in legal proceedings: A call for guidance and awareness

With access to justice at such a low ebb, there are now more people than ever representing themselves at court – often through necessity rather than choice. The main resource available to them is The Handbook for Litigants in Person, a document created by judges which explains the stages and process of a civil case.

But the document, as it acknowledges in its preface, is “not comprehensive” and “cannot possibly be”. It is also 170 pages long.

Meanwhile, the use of ChatGPT is rising rapidly – and the software is becoming increasingly capable. So it’s hardly surprising that litigants in person are turning to generative AI, which can draft a legal argument for them in minutes, rather than to the handbook.

However, what these litigants often don’t realise is that using generative AI in this way comes with serious legal pitfalls, and may well cause more harm than good. Why?

Firstly, because AI programmes can suffer from “hallucinations” and generate fictional citations – which waste valuable court time and money to correct, and can damage the outcome of a case.

This creates a dilemma for judges: litigants who rely on these tools have no legal training, so they can hardly be criticised for using AI – however inaccurate the cases it generates may be.

In the December 2023 Harber Tax Tribunal decision, for example, rather than criticising Ms Harber for relying on nine hallucinated cases to support her appeal, the judge expressed concern at the time and resources wasted searching for non-existent authorities. She also noted that the time spent on the fabricated cases would delay other cases progressing through the courts. Normally, litigants are heavily penalised for wasting court time or money – but judges accept that this approach isn’t fair in this context, given that the implications and risks of using AI in the courtroom are not immediately obvious.

AI hallucinations are not the only issue associated with its use amongst litigants in person. Another major problem is that the text it produces can seem accurate and convincing to those without legal training, when it actually says the wrong thing, or very little of substance. It can also mix up US and UK terminology, and civil and criminal law. Some paid subscription tiers may mitigate these issues, but this can’t be guaranteed – and few users will realise that such tiers are necessary in the first place.

So how do we combat these issues? We should firstly accept that AI can provide some useful support to litigants in person – especially when it comes to laying out a court document, or wording a particular argument. For that reason, we certainly shouldn’t discourage people from using AI altogether when preparing for a case. What we must do, however, is clearly communicate its drawbacks, and why they must always be kept in mind.

More guidance is needed from the courts on the use of AI in the courtroom and its implications, including its benefits and limitations. More widely, the advice currently available to litigants in person should be updated to make it easier to use and digest. Ultimately, rather than simply discouraging the use of AI, we should be addressing the problem that drives litigants to use it in the first place.
