Governing algorithms in the big data era for balancing new digital rights - designing GDPR compliant and trustworthy XAI systems

Yalcin, Orhan Gazi (2023) Governing algorithms in the big data era for balancing new digital rights - designing GDPR compliant and trustworthy XAI systems, [Dissertation thesis], Alma Mater Studiorum Università di Bologna. Dottorato di ricerca in Law, science and technology, 35 Ciclo. DOI 10.48676/unibo/amsdottorato/10511.
Full-text document available:
PDF document (English), available under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 licence (CC BY-NC-ND 4.0).
Download (6MB)

Abstract

This thesis investigates the legal, ethical, technical, and psychological issues raised by general data processing and artificial intelligence practices, and by the explainability of AI systems. It consists of two main parts. In the first part, we provide a comprehensive overview of the big data processing ecosystem and the main challenges we face today, and we evaluate the GDPR’s data privacy framework in the European Union. The Trustworthy AI Framework proposed by the EU’s High-Level Expert Group on AI (AI HLEG) is examined in detail, and the ethical principles for the foundation and realization of Trustworthy AI are analyzed along with the assessment list prepared by the AI HLEG. We then list the main big data challenges identified by European researchers and institutions and review the literature on the technical and organizational measures to address them. A quantitative analysis of these challenges and measures leads to practical recommendations for better data processing and AI practices in the EU. In the second part, we concentrate on the explainability of AI systems. We clarify the terminology, list the goals pursued through explainability, identify the causes of the explainability-accuracy trade-off, and discuss how it can be addressed. We conduct a comparative cognitive analysis of human reasoning and machine-generated explanations, with the aim of understanding how explainable AI can contribute to human reasoning. We then focus on the technical and legal responses to the explainability problem: the GDPR’s right-to-explanation framework and its safeguards are analyzed in depth, together with their contribution to the realization of Trustworthy AI. Finally, we analyze the explanation techniques applicable at different stages of machine learning and propose several recommendations, in chronological order, for developing GDPR-compliant and Trustworthy XAI systems.

Document type
Doctoral thesis
Author
Yalcin, Orhan Gazi
Supervisor
Co-supervisor
PhD programme
Law, Science and Technology
Cycle
35
Coordinator
Disciplinary sector
Competition sector
Keywords
Explainable Artificial Intelligence, XAI, explainability, the right to explanation, Trustworthy AI, GDPR, digital ethics, black-box systems
URN:NBN
DOI
10.48676/unibo/amsdottorato/10511
Defence date
27 March 2023
URI
