Given advances in generative AI, could a person in court assisted primarily by AI outperform a regular human lawyer? Australian law firm Lander & Rogers decided to find out.
The mock trial at SXSW Sydney was judged by Professor David Lindsay from the University of Technology Sydney and used the AI tool NexLaw, supported by Ken Leung, a lawyer on Lander & Rogers’ Digital Economy team.
The experiment, hosted by Courtney Blackman, director of the LawTech Hub at the firm, used ‘a relatable traffic offence’ as the sample case. The case involved the defendant allegedly using a mobile phone while driving, ‘providing a straightforward matter for both human and artificial intelligence to highlight their capabilities’.
As for how it played out, one of the firm’s lawyers, Jeanette Merjane, represented a randomly selected audience member, while a second randomly selected audience member was self-represented using the NexLaw tool, with some additional prompting support from Leung to ensure everything ran smoothly.
Here is what happened.
As the law firm behind the experiment explained: ‘Despite the advanced capabilities of the AI platform, Merjane’s courtroom skills including advocacy, analytical and critical thinking, [and] ethical judgment…ultimately led to a victory for human intelligence.’
In short, in a live courtroom setting (even a mock one), where you have to appeal to a judge’s sensibilities and their sense of how to apply the law to a very specific real-world matter, humans – for now – remain ahead of what AI can achieve.
However, Merjane commented that, despite the clear advantages of being human at trial: ‘I am very open to incorporating an AI-powered trial platform into my legal practice. NexLaw’s ability to quickly process and analyse large amounts of data would significantly boost our litigation capabilities.’
Meanwhile, Leung commented: ‘The mock trial at SXSW Sydney was a fantastic experience. It highlighted the potential of AI in assisting legal processes, but also underscored the irreplaceable value of human judgment and expertise. The event provided us with valuable insights into how AI can be integrated into legal practice to complement, rather than replace, human lawyers.’
And Professor Lindsay – who played the judge in the mock trial – gave a detailed analysis which contained some important points: ‘There is great potential for AI to assist with improving access to justice. But, at this stage, the immediate future will involve trained lawyers working alongside AI systems.’
‘Ms Merjane was very well-prepared and performed as a highly trained legal professional. The self-defended party, using AI, did an impressive job of constructing coherent arguments that persuasively and accurately dealt with the evidence.
‘That said, there were some technical issues with the case. In particular, the responses from the AI system did not focus as clearly on the elements needed to establish the offence. And the AI system incorrectly cited legislation.’
‘In addition, the outputs – while impressive – depended significantly on the expert prompts from trained lawyer, Ken Leung. It is unlikely that an untrained person would achieve the same quality of outputs.’
–
So, what does this all mean? For Artificial Lawyer the key takeaway is that as AI steadily improves it should be used in the courtroom, not just beforehand for case preparation. However, where at all possible – and that may depend on economic constraints – all parties should be represented by a real, human lawyer.
But, this site sees no problem with a litigant in person, or a lawyer representing a client, using an AI tool of whatever type to help improve their performance before a judge. That would mean having access to an electronic device in the courtroom, which of course may not be allowed in some places.
Are AI tools really up to guiding a litigant in person through a complex live hearing on their own? No. Not yet, anyway.
But, looking ahead, ethically speaking, what is the greater risk to the well-being of our society: people without a lawyer, without a clue what to do, and therefore at a huge disadvantage in court; or people using AI in a hearing to help them? In other words, a continuing access to justice problem or a tech-based partial solution?
Moreover, beyond A2J issues, there is no reason why a lawyer handling a commercial case should not also tap AI in the courtroom, in addition to any AI-driven work they have done in the lead-up to the case.
But, on the subject of judges…hmmm…that is far trickier. Lawyers using AI to make their arguments is one thing, but judging…for this site it feels like that should always be kept as human as possible, with very little technological intervention. After all, that’s why we have judges – to leverage their very human attributes in that most essential, but highly fraught, task: one person judging another.
So, in short, let’s use AI to help litigants and their lawyers to improve access to justice, but let’s leave judgment to the humans alone. And, yes, let’s work toward integrating AI into many other aspects of the justice process as well.
—
[Photo: the team involved in the project in Sydney.]