Session

PSG. 18-3: Justice and Court Administration: Artificial Intelligence in the Judiciary I

Presentations
Semi-automated or semi-independent? Unpacking human oversight of AI-supported judicial decision-making
University of Amsterdam, The Netherlands

As high-risk systems under the forthcoming EU AI Act, algorithmic systems used to support judicial decision-making must enable effective human oversight. Broadly speaking, this means that the humans-in-the-loop of such systems must be able to exercise agency over the decision-making process by correcting or interpreting algorithmic outputs in a way that prevents or mitigates harm. But what it means to effectively oversee algorithmic systems used by institutions responsible for safeguarding democracy, the rule of law, and fundamental rights remains largely undefined. This article deconstructs the concept of human oversight in the judicial context and surfaces its underlying aims and assumptions. Drawing on legal and empirical research on human-machine interaction, it explores what it would mean for humans to oversee algorithmic decision support systems in service of judicial values, and how to assess whether this is achieved in practice. Part I describes the overlapping roles that humans-in-the-loop of judicial decision support systems are explicitly or implicitly expected to play, and the challenges that this ambiguity creates. Part II discusses who is tasked with overseeing these systems and to what extent they can be relied on to perform judicial oversight roles. Part III asks when in an algorithmically supported judicial decision-making process human oversight is expected to take place; in other words: what constitutes "the loop," and is it wide enough? The article concludes with a reflection on the conditions that would need to be fulfilled for humans-in-the-loop of decision support systems to live up to the normative expectations associated with human oversight in the judicial context.
Artificial Intelligence as a Response of the Passive State
Masaryk University, Czech Republic

The proposed article presents an analysis based on the convergence of two observations. Firstly, there is a considerable delegation of (institutional) power towards private entities. This might be seen as a quite natural process foreshadowed by the liberal conception of a "passive state" that is primarily inactive and whose activities have been delegated to private entities.[1] This conception has since been taken beyond its roots in liberal state theory and now generally points to phenomena where the state is inactive in areas where it ought to be active.[2] An example central to the presented analysis is that of (various forms of) Online Dispute Resolution (ODR). While grounded in the Alternative Dispute Resolution approach in general[3], which could itself serve as a basis for the following analysis, ODR took the basic premise of resolving disputes out of court and made it more efficient and more widely available by utilising modern technology.[4] It thus allowed a broad public to resolve their issues, often a perceived harm to their rights, in a manner that substituted a private body for the state's institution (the court). This phenomenon has only grown further: consider, for example, the discussion on the broad and narrow "modern" definitions of ODR, which could potentially include even state-backed online courts (in the broad sense)[5], or the furthering of private entities' powers by not only allowing but directly mandating them to provide a dispute resolution mechanism in certain cases, such as various online platform-based disputes.[6] It might thus seem that the state has fully embraced its passive role and has chosen not to solve the grave issue of an over-burdened judicial system[7], but rather to fully mandate private entities to do so.
The popularity of this solution, attested not only by its wide utilisation but also by its widening legal mandate, shows that there exists a significant portion of disputes that the state is unable to resolve. The second, converging observation is the technological boom surrounding artificial intelligence and its capability of automating certain (human) activities. While we may not think of state institutions as early adopters, this was not necessarily the case with the adoption of networking and electronic communications, for example.[8] However, when it comes to the use of a technology as substantial as automation by means of AI, which does not merely facilitate the process (as ODR platforms or electronic communication in eGovernment do), a discussion on the impact on fundamental rights quickly springs up.[9] Such a discussion is even more pressing when an issue with possibly grave impacts, such as automated (legal) decision-making, is considered.[10] Another issue often discussed in connection with automating legal proceedings, and one to which this submission aims to contribute, is that of choosing a field of decision-making that is a good fit for automation.[11] Mindful of the presented premises, the submission will firstly argue that, through automation and suitable use of AI, the judiciary could be made more effective, meaning that the state could provide protection even in cases where it currently outsources its actions to private actors. The article will further argue that using automation in areas where no effective state remedy is currently available should allow more leeway in observing the due process and other rights often debated in the context of automated justice. Lastly, the submission will present a methodology for identifying such areas.

[1] Shamir, H. The State of Care: Rethinking the Distributive Effects of Familial Care Policies in Liberal Welfare States. The American Journal of Comparative Law, Vol. 58, No. 4.
[2] For example, the state's duty to provide an effective remedy (under the ECHR) or, in general, the duties to which the state is bound by the social contract.
[3] Katsh, E. ODR: A Look at History. In: Wahab, M. S. Abdel et al. Online Dispute Resolution: Theory and Practice: A Treatise on Technology and Dispute Resolution. The Hague: Eleven International Publishing, 2012. ISBN 978-94-90947-25-5.
[4] Katsh, E., Rifkin, J. Online Dispute Resolution: Resolving Conflicts in Cyberspace.
[5] Kaufmann-Kohler, G., Schultz, T. Online Dispute Resolution: Challenges for Contemporary Justice.
[6] See the current Digital Services Act, or the arguments in Krištofík, A., Loutocký, P. ODR and Online Courts: What Is Their Future after the AI Act. Jusletter IT. Die Zeitschrift für IT und Recht. Bern: Weblaw, 2024, vol. 27, no. 1, pp. 197-204.
[7] Frost, B. Overburdened Courts and Underutilized Information Technology: A Modern Prescription for a Chronic Disorder. Judicature, vol. 68, no. 1, pp. 30-36.
[8] Martin-Bariteau, F., Scassa, T. Artificial Intelligence and Law in Canada. LexisNexis, 2021.
[9] See, for example, the recently passed AI Act, or the works by the AI HLEG leading to it.
[10] See, for example, Reiling, D. Courts and Artificial Intelligence. International Journal for Court Administration, vol. 11, no. 2.
[11] Ibid.

AI at Courts
Oberlandesgericht Köln (Higher Regional Court of Cologne), Germany

The author will report on the use of artificial intelligence and legal tech in the courts of the Cologne Higher Regional Court district. A particular focus will be on the ThinkTank for Artificial Intelligence (AI) based at the Cologne Higher Regional Court. The ThinkTank is intended to be a creative source of inspiration and a central point of contact for projects dealing with the digital transformation of the judiciary. Its aim is to find new ways of working together to shape future projects in the justice system.
The ThinkTank is also intended to act as a catalyst for new ideas and to identify potential for collaboration. In addition to collaborating on and participating in existing projects, such as the creation of an online civil law procedure and a digital legal application office, it will contribute its own ideas and initiate pilot and research projects.