Abstract illustration of a cream-colored human head with glowing red-orange circuits on a maroon background, reflecting AI complexity in justice themes.

The Challenges of Artificial Intelligence (AI) in the Field of Justice

Jun 16, 2025

Artificial Intelligence is transforming the administration of justice, offering new capabilities while posing risks to fundamental rights and judicial judgment.

The rapid advancement of technology and the ever-increasing pace of modern life make new technological tools indispensable if the individual in modern society is to manage, quickly and effectively, the enormous volume of information received daily. In the administration of justice, the use of AI tools is expanding rapidly, offering possibilities for improving efficiency while at the same time raising concerns about the distortion of judicial judgment.

The term “Artificial Intelligence,” as understood by the European Union and the OECD, refers to software capable of generating content, predictions, recommendations, or decisions that influence the environment with which it interacts. The widely known application ChatGPT has clearly facilitated the work of the administration of justice, as it can, through dialogue, process, classify, and interpret complex legal data supplied by the user and generate largely accurate, though not infallible, content. Moreover, such systems are continuously retrained and refined by their developers, so the quality of their output keeps improving.

More specifically, AI offers significant assistance in the field of justice: within fractions of a second it can conduct thorough legal research, analyze complex legal information, formulate predictions about the outcome of pending proceedings, and assist judicial officers in the decision-making process. In the United States, in a large number of criminal cases, and especially in decisions on pretrial detention or sentencing, risk-assessment tools such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) are used; these generate risk scores that inform, rather than automatically determine, the judicial outcome.

At this point a reasonable question arises: could “Artificial Intelligence” lead to an “Artificial Justice”? In other words, could machines based on ever more refined algorithms largely replace the human contribution? Artificial intelligence undoubtedly has the potential to transform the judicial process radically, but it will never be able to replace the human-centered role of justice. The formation of judicial judgment is a multifaceted process carried out by the Judge, and it must remain uninfluenced even in the face of disruptive technological challenges.

Specifically, judicial judgment is a process of evaluating evidence, applying legal rules, and weighing moral considerations. It rests on human interpretation, intuition, experience, and emotional intelligence. The human-centered role of judicial judgment focuses on the individual: not only the defendant and the victim, but also the citizen as a member of society at large. The Judge, therefore, does not merely apply rules but seeks justice guided by the principle of proportionality, placing the human being at the center as a bearer of rights and as a moral personality. Consequently, the revolutionary incursion of AI must be examined critically and approached with caution; otherwise, judicial judgment risks being reduced to a standardized, mechanical process, with serious consequences for the safeguarding of citizens' rights.

At the European level, the EU has adopted a pioneering legal framework for Artificial Intelligence, known as the Artificial Intelligence Act (AI Act), which entered into force on August 1, 2024. This Regulation constitutes the first international attempt at a comprehensive regulation of AI. It establishes a risk-based framework and directs Member States to ensure the proper use of AI tools. Moreover, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe has adopted the “European Ethical Charter on the use of AI in judicial systems,” which sets out principles such as non-discrimination, transparency, and human oversight in the use of AI in justice.

Indeed, the Court of Justice of the European Union (CJEU), even before the above-mentioned regulations came into effect, had already developed case law touching on AI, based primarily on the GDPR, the Charter of Fundamental Rights of the European Union (CFR), and international conventions. Perhaps the Court's most significant decision on AI and automated decision-making was issued on December 7, 2023, in Case C-634/21 (OQ v. Land Hessen, with SCHUFA Holding AG as intervening party). This decision is a landmark in the interpretation of Article 22 of the General Data Protection Regulation (GDPR), which concerns decisions based solely on automated processing, including profiling, that produce legal effects or similarly significantly affect the data subject.

For the first time, the Court delineated the scope of automated decision-making and stated clearly that the final say in such decisions must rest with a human being. The case concerned SCHUFA, a German credit rating agency that supplies credit scores to banks and other institutions through algorithmic processing of personal data. A citizen whose loan application was rejected because of a low score requested access to the data and the methods used to calculate it. SCHUFA refused to disclose details, citing commercial secrecy.

Upon examination of the case, the CJEU ruled that the generation of a credit score through automated processing, which decisively influences a third party’s decision (e.g., a bank) to grant or deny credit, constitutes automated decision-making under Article 22 of the GDPR—which aims to protect individuals from decisions made solely by automated means without human intervention. The Court also emphasized that the organization cannot be exempted from its obligations under the GDPR and that citizens have the right to receive meaningful information regarding the logic involved in automated processing, even when commercial secrets are at stake.

The CJEU thus gave precedence to human intervention, stressed that automated decision-making poses risks to citizens' rights, and demonstrated clearly that the judiciary does not blindly accept outcomes produced by machines.

By contrast, the case law of the Greek judiciary on the application of AI remains limited, and no AI-specific legislation is yet in place; however, alignment with the AI Act and the establishment of a National AI Authority are foreseen. The main legal frameworks currently used to address AI-related issues at the national level remain the GDPR, the Penal Code, consumer protection legislation, and Law 4727/2020 on Digital Governance. Finally, a significant step in this direction was taken with the drafting of the National AI Strategy in 2021, which promotes ethical principles and transparency in the operation of modern technological tools.

In any case, however, both at the European and national level, a specialized regulatory framework for AI must be adopted—one that will continuously evolve and balance innovation with the protection of fundamental rights. The decisions of the CJEU and initiatives such as the AI Act already constitute significant steps in this direction.