How to explain: from theory to practice

Sovrano, Francesco (2023) How to explain: from theory to practice, [Dissertation thesis], Alma Mater Studiorum Università di Bologna. PhD programme in Data Science and Computation, Cycle 34. DOI 10.48676/unibo/amsdottorato/10943.
Full-text documents available:
PDF document (English); requires a PDF reader such as Xpdf or Adobe Acrobat Reader
Available under licence: unless the author has granted broader permissions, the thesis may be freely consulted, and a copy may be saved and printed strictly for personal purposes of study, research, and teaching; any directly or indirectly commercial use is expressly prohibited. All other rights to the material are reserved.
Download (6 MB)

Abstract

Today we live in an age where the internet and artificial intelligence allow us to search for information through impressive amounts of data, opening up revolutionary new ways to make sense of reality and understand our world. However, we have yet to exploit the full potential of large amounts of explainable information by automatically distilling it into intuitive, user-centred explanations. For instance, different people (or artificial agents) may search for and request different types of information in a different order, so it is unlikely that a single short explanation can suffice for all needs in the most generic case. Moreover, dumping a large portion of explainable information into a one-size-fits-all representation may also be sub-optimal, as the needed information may be scarce and dispersed across hundreds of pages. The aim of this work is to investigate how to automatically generate (user-centred) explanations from heterogeneous and large collections of data, focusing on the concept of explanation in a broad sense: as a critical artefact for intelligence, whether human or robotic. Our approach builds on and extends Achinstein's philosophical theory of explanations, where explaining is an illocutionary (i.e., broad but relevant) act of usefully answering questions. Specifically, we provide the theoretical foundations of Explanatory Artificial Intelligence (YAI), formally defining a user-centred explanatory tool and the space of all the explanations it can generate, which we call the explanatory space. We present empirical results in support of our theory, showcasing the implementation of YAI tools and strategies for assessing explainability. To justify and evaluate the proposed theories and models, we considered case studies at the intersection of artificial intelligence and law, with a particular focus on European legislation. Our tools helped produce better explanations of software documentation and legal texts for humans, and of complex regulations for reinforcement-learning agents.
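The abstract's notion of an explanatory space, i.e., the set of all explanations a user-centred tool can generate, can be pictured as a navigable graph of question-answer steps, where different users take different paths. The following is a minimal, hypothetical Python sketch of that idea only; the class names, the graph structure, and the loan-denial content are illustrative assumptions, not taken from the thesis.

```python
from dataclasses import dataclass, field

# Toy model of an "explanatory space": a directed graph whose nodes hold
# pieces of explainable information and whose edges are the follow-up
# questions (why, how, what, ...) that lead from one piece to the next.
@dataclass
class Node:
    answer: str                                      # one piece of explainable information
    follow_ups: dict = field(default_factory=dict)   # question -> Node

def explain(node: Node, questions: list[str]) -> list[str]:
    """Walk the space along a user-chosen sequence of questions,
    collecting one answer per step (a user-centred explanation path)."""
    path = [node.answer]
    for q in questions:
        if q not in node.follow_ups:
            break                                    # no answer for this question
        node = node.follow_ups[q]
        path.append(node.answer)
    return path

# Build a tiny space about a (fictional) loan-denial decision.
root = Node("The loan application was denied.")
why = Node("The applicant's debt-to-income ratio exceeds 40%.")
how = Node("Monthly debt payments divided by gross monthly income.")
root.follow_ups["why?"] = why
why.follow_ups["how is it computed?"] = how

# Two users asking different question sequences get different explanations,
# rather than one one-size-fits-all answer.
print(explain(root, ["why?"]))
print(explain(root, ["why?", "how is it computed?"]))
```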

Document type: Doctoral thesis
Author: Sovrano, Francesco
Supervisor:
Co-supervisor:
PhD programme: Data Science and Computation
Cycle: 34
Coordinator:
Disciplinary sector:
Competition sector:
Keywords: Explanatory Artificial Intelligence (YAI); user-centred explanations; Achinstein's philosophical theory of explanations
URN:NBN:
DOI: 10.48676/unibo/amsdottorato/10943
Date of defence: 16 June 2023
URI:
