There’s a lot the legal industry can learn from the next generation, particularly when it comes to AI. Gen Z grew up with technology at their fingertips and is the first generation of lawyers to enter the sector while it is being digitally transformed.
Often dubbed the ‘AI generation’, this new wave of lawyers is providing a unique perspective and understanding of how new technologies can be used in legal practice. However, we must remember that no matter how tech-savvy they may be, junior and aspiring lawyers are by no means exempt from the multitude of moral and ethical risks that come with using generative AI.
So how can we ensure that the next generation of lawyers is able to harness the full potential of AI, while also mitigating its potential risks?
Universities, of course, play a huge role in providing the foundations: educating law students so they’re fully aware of both the opportunities and challenges associated with generative AI, and of how its use sits alongside their conduct duties and obligations to the court during the vocational stage of training.
But law firms also have obligations and responsibilities. While universities will equip students with the knowledge they need to use generative AI safely, it’s up to law firms to help them put this into practice.
To start with, firms need to consider how susceptible their recruitment processes are to generative AI. The use of software such as ChatGPT in training contract applications has soared in recent years, creating headaches for many law firms and ultimately benefitting no one if hiring mistakes are made as a result. Because of this, it’s really important for employers to clearly state their expectations with regard to the use of AI in application processes. It’s also worth considering other ways of evaluating candidates – such as practical or verbal assessments, which can’t be AI-generated.
Before junior staff come on board, their colleagues should already be aware of the challenges AI presents and have clear policies and procedures in place. That way, they can properly communicate these expectations and set a good example to new joiners. Although senior staff are less likely to rely on AI for their work (65% of AI users are millennials or Gen Z, while 68% of non-users are Gen X or Baby Boomers), they still need to know how to use it in the right way, as younger colleagues will look to them for guidance. There are various factors to consider when it comes to generative AI: client confidentiality, for example, and the fact that it can hallucinate cases and confuse US and UK law, which is why it shouldn’t be relied upon for drafting court documents or researching case law. These issues need to be front of mind for everyone and communicated to junior team members when work is delegated to them.
Likewise, firms must be up to speed with the differing regulatory approaches to AI – the UK and EU standpoints are quite different, for example. This is particularly important if law firms operate across different jurisdictions.
These are just some of the points which need to be taken into consideration. We shouldn’t try to prevent junior lawyers from using AI; rather, firms have a responsibility to ensure it’s being used responsibly. That way, we can fully take advantage of what AI has to offer – including the opportunity for different generations to learn from one another.