Subproject 12 - Evaluation and Key Performance Indicators

Start – 2020 | End – 2024 | Duration – 4 years

Measuring the progress of cyberjustice requires reliable quality and performance indicators endorsed by the scientific community. Because judicial actors must be convinced of the benefits of cyberjustice, the methods used to evaluate these technologies matter, including from the perspective of their international implementation across different legal systems, such as continental Europe and common law countries.

Subproject chief
Harold Épineuse

Research activities

Case studies

In this subproject, researchers and collaborators will carry out a series of research activities to meet the expectations of the four main categories of partners involved in the project: institutional, professional, industrial and community.

A set of case studies will investigate the potential and impact of AI on the empowerment of judicial actors, covering diverse types of AI applied in justice (for example, AI for lawyers, AI for the judiciary, AI for police forces). The studies will be presented in workshops and then submitted for publication in scientific and professional journals.


An inventory of evaluation frameworks will be compiled at the beginning of the subproject and maintained throughout the project; it will be published online so that it can receive comments from a broader community. The inventory will be twofold, covering both evaluation frameworks used in the justice sector and evaluation methods and practices drawn from AI.

Best Practices Guide

“Success” and “failure” stories drawn from the analysis of the case studies will be used to identify best practices for the implementation of AI in justice, as well as the factors that may influence the outcome of any AI project in the field. These findings will be shared at a conference open to a large public of researchers and professionals.

Governance Framework

The subproject aims to provide a critical analysis of the indicators gathered in its inventory of evaluation frameworks and to suggest relevant indicators for a new framework specific to the evaluation of AI in justice, including, more specifically, an evaluation of AI's contribution to the autonomy of the different actors.

An original evaluation framework will be proposed and tested before being disseminated online as part of a governance framework for AI in justice, based on specific performance and quality indicators, in order to identify and promote a) the best strategies for successfully implementing AI in justice, b) the factors that may influence the outcome of an AI project and c) quality indicators for evaluating AI services.

The collection of key indicators resulting from the subproject's activities will be integrated into the general governance framework proposed by the multidisciplinary research partnership in Working Group 3.



Institutional partner 
Academic partner 

Professional partner 


This content has been updated on 8 September 2020 at 9:43 a.m.