Legal reasoning through factor-based reasoning and argumentation in the context of explainability

Billi, Marco (2025) Legal reasoning through factor-based reasoning and argumentation in the context of explainability, [Dissertation thesis], Alma Mater Studiorum Università di Bologna. PhD programme in Law, Science and Technology, Cycle 37. DOI 10.48676/unibo/amsdottorato/12213.
Full-text documents available:
PhD_Thesis_final.pdf — PDF document (English). Requires a PDF reader such as Xpdf or Adobe Acrobat Reader.
Available under licence: Except where the author has granted broader permissions, the thesis may be freely consulted, and a copy may be saved and printed strictly for personal purposes of study, research and teaching; any direct or indirect commercial use is expressly prohibited. All other rights to the material are reserved.
Download (5MB)

Abstract

This thesis explores methods for explaining AI decisions, with a particular focus on ensuring users' right to appeal such decisions. The central research question guiding this work is: how can we ensure the right to appeal an AI-generated decision? This question is critical for safeguarding fairness and transparency in AI-driven systems, especially in the legal domain, where outcomes can have serious implications for individuals. While the AI Act establishes a 'right to request clear and meaningful explanations' from AI systems involved in decision-making, it stops short of mandating fully transparent, white-box models. This thesis introduces methods to meet transparency requirements by combining symbolic legal models with machine learning techniques to enhance the explainability of AI-driven legal decisions. A series of experiments provides guidance on representing legally relevant factors and establishing logical connections to outcomes, beginning with symbolic expert systems for assessing the compatibility of EU and national law. The thesis then extends to machine learning models for classifying legal judgments, balancing transparency with usability. Expert systems are shown to excel in transparency by offering step-by-step reasoning that enhances user understanding, while machine learning models improve accessibility by streamlining interaction. In conclusion, ensuring user rights in AI-driven legal contexts requires clear, comprehensible explanations of legal factors, grounded in both statutory and case law. The thesis emphasizes that AI in legal domains must support rational decision-making, aligning with legal standards and user expectations.

Document type
Doctoral thesis
Author
Billi, Marco
Supervisor
PhD programme
Cycle
37
Coordinator
Academic discipline
Academic recruitment field
Keywords
explainability, logic programming, argumentation, large language models
DOI
10.48676/unibo/amsdottorato/12213
Defence date
10 April 2025
URI
