With the Estonian Ministry of Justice (EMOJ) planning to pilot an AI judicial system later this year, businesses and litigants in person may benefit from greater accessibility, lower costs and quicker turnaround times. But what considerations arise from replacing human judges with AI? And what would the justice system look like in a future in which an AI judiciary becomes a reality?
AI in Estonia
Despite having a population smaller than Hamburg, Estonia has a lot to shout about when it comes to applying AI to public services. Among a number of other reported applications, one of the Estonian government’s most ambitious is the development of an ‘AI judge’. The pilot AI judge is reported to go live later this year, initially to be tested on small value contractual claims. Should it be a success, the EMOJ will no doubt look to spread its use to other types of dispute.
Can we trust AI to handle legal issues?
The collision of law and AI is nothing new, and preliminary models have shown promising accuracy. In 2016, for example, a group of computer scientists conducted a systematic review of European human rights cases, feeding just shy of 600 cases into their model. The model was able to predict whether there had been a violation of human rights with an impressive success rate of 79%. Such studies are evidence that, with an even richer and broader input of case law, AI judges could have the potential to reach accuracy levels equal to or even better than those of their human counterparts.
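To make the idea of outcome prediction concrete, here is a minimal, self-contained sketch of a text classifier in the same spirit. It is not the 2016 study's actual model (that work used richer features over real ECHR judgments); this is a toy bag-of-words Naive Bayes classifier trained on a handful of invented case summaries, predicting a "violation" / "no violation" label.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs. Returns model parameters."""
    word_counts = defaultdict(Counter)  # label -> word frequency table
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def predict(model, text):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented training data, for demonstration only.
training = [
    ("detention without trial prolonged custody", "violation"),
    ("torture inhuman treatment in custody", "violation"),
    ("fair hearing provided adequate remedy", "no violation"),
    ("complaint manifestly ill-founded adequate remedy", "no violation"),
]
model = train(training)
print(predict(model, "prolonged detention without fair trial"))  # → violation
```

The point of the sketch is that nothing mystical is happening: the model learns word statistics from labelled past cases and scores new fact patterns against them, which is also why the quality and breadth of the input case law matters so much.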
What happens if an AI judge makes a wrong decision?
Should an AI judge hand down the ‘wrong’ judgment, who should bear the responsibility? If the fault lies with an issue in the algorithm itself or with the inputted data set, should the developers take responsibility? Or perhaps the legal body giving its seal of approval to the AI model?
Human judges are given immunity from prosecution for acts carried out or decisions made whilst performing their judicial function. Allegations that a judgment is incorrect must be pursued through the appeals system. Logic would dictate that the same set of rules should apply to an AI judiciary in order for it to be effective and successful, rather than placing it within a legal liability model where responsibility falls on an individual’s shoulders.
Should decisions from an AI judge be taken seriously?
The authority which human judges have to pass legally binding judgments is enshrined in custom or statute and legislative reform would be necessary to provide an AI judge with the requisite jurisdiction.
Beyond the formalities, however, human judges offer not only the knowledge acquired through a career of legal experience, but also human traits such as empathy, which court users seek when standing in the courtroom. A judge is a representative of society, who sits and listens as an individual’s case is argued, giving the promise of a fair trial – a fundamental human right. Replacing judges with an AI system could lead to a loss of empathy in the courtroom.
Such questions go to the heart of the trust placed in the civil justice system. An AI judge, in order to be successful and accepted, requires trust in the programming and monitoring of the decisions being made. Should the developers be required to have legal training? Would it be sufficient for a judge to simply ‘sign off’ on an AI model and its decisions?
Indeed, in 2018, the European Commission for the Efficiency of Justice published an ethical charter on the use of AI in judicial systems. The CEPEJ recognises that the application of AI can improve the efficiency and quality of a judiciary, but such systems must be implemented in a responsible manner which complies with citizens’ guaranteed fundamental rights. The charter accordingly sets out core principles to which AI justice developments should adhere, including respect for fundamental rights; non-discrimination; quality and security; transparency, impartiality and fairness; and the principle of “under user control”.
The future of the justice system
AI is, at least initially, well suited for matters of small value and those with simple, repetitive fact patterns. Incorporating AI into the judicial system could have a profound effect on the character and very nature of civil disputes.
By way of example, barristers attempt to estimate the outcome of a case by analysing its merits, and such predictions usually come caveated with the warning that the legal system is ‘unpredictable’. Very rarely would a lawyer tell their client that their case is a ‘slam dunk’. Like all technology, once commonplace, AI judging models could be vulnerable to replication. If replica AI judges were available for public use, parties to a claim could test the waters of their case long before stepping into a courtroom, and with far greater accuracy. With the incorporation of an AI judiciary, we could therefore see many more cases settle out of court, as parties would have a more reliable indication of the merits of their case.
By Jonathan Edwards, Bird & Bird