Deep learning and explainable artificial intelligence in brain tumor segmentation and classification from MRI

Xie, Yuting (2024) Deep learning and explainable artificial intelligence in brain tumor segmentation and classification from MRI, [Dissertation thesis], Alma Mater Studiorum Università di Bologna. PhD programme in Biomedical and Neuromotor Sciences, Cycle 36.
Full-text documents available:
PDF document (English) - Restricted access until 14 May 2027 - Requires a PDF reader such as Xpdf or Adobe Acrobat Reader
Available under license: Unless the author grants broader permissions, the thesis may be freely consulted, and a copy may be saved and printed for strictly personal study, research, and teaching purposes; any direct or indirect commercial use is expressly prohibited. All other rights to the material are reserved.

Abstract

Convolutional neural networks (CNNs), a powerful subset of deep learning techniques, have demonstrated remarkable performance in segmenting and classifying brain tumors from medical images. However, translating these techniques into clinical applications faces critical challenges. This thesis starts with a comprehensive review of CNN methods for brain tumor classification, aimed at identifying the key challenges that hinder the clinical application of CNNs to brain tumor diagnosis. Eighty-three relevant articles were identified using a predefined, systematic procedure. For each, data were extracted regarding training data, target problem, network architecture, validation methods, and reported quantitative performance criteria. The clinical relevance of the studies was then evaluated to identify limitations, considering the merits of CNNs and possible directions for future research. Next, I develop an Interpretable Multi-part Attention Network (IMPA-Net) for brain tumor classification to enhance interpretability and trustworthiness. The model provides both global and local explanations. The global explanation is a set of feature patterns the model learns in order to distinguish the high-grade glioma (HGG) and low-grade glioma (LGG) classes; the local explanation interprets the reasoning behind an individual prediction in terms of these pre-learned task-related features. Experiments demonstrate that 86% of the feature patterns were assessed as valid representations of task-relevant features. The model achieves a classification accuracy of 92.12%, and 81.17% of its predictions were evaluated as trustworthy on the basis of their local explanations, so it can be used as a decision aid for glioma classification. Finally, I explore the transferability and applicability of pre-trained DeepMedic and nnUNet models for brain tumor segmentation on three datasets (the UCSF-PDGM dataset, the BraTS 2017 training dataset, and the private OUH dataset) without further fine-tuning. The models were trained on 3D MRI data from the BraTS 2021 dataset. Results demonstrate that the pre-trained DeepMedic and nnUNet generalize well to new, unseen datasets in the same domain as the pre-training data.
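As context for the segmentation experiments, generalization of pre-trained models to unseen datasets is typically quantified with overlap metrics such as the Dice coefficient, the standard metric in BraTS-style evaluations. The snippet below is a minimal, self-contained sketch of a per-case Dice computation on binary tumor masks; the synthetic arrays and the dice_score helper are illustrative assumptions, not code from the thesis.

    import numpy as np

    def dice_score(pred, truth):
        """Dice overlap between two binary segmentation masks."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        intersection = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * intersection / denom

    # Toy example: a synthetic 3D volume standing in for one MRI case.
    rng = np.random.default_rng(0)
    truth = rng.random((32, 32, 32)) > 0.7   # hypothetical ground-truth tumor mask
    pred = truth.copy()
    pred[:4] = ~pred[:4]                     # perturb the prediction slightly
    print(f"Dice: {dice_score(pred, truth):.3f}")

In practice, such a score would be computed per tumor subregion and per case on the held-out datasets, then averaged to compare the pre-trained models' out-of-the-box generalization.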

Document type: Doctoral thesis
Author: Xie, Yuting
Supervisor:
Co-supervisor:
PhD programme: Biomedical and Neuromotor Sciences
Cycle: 36
Coordinator:
Disciplinary sector:
Competition sector:
Keywords: Deep learning, convolutional neural network, brain tumor classification, magnetic resonance imaging, interpretability, trustworthiness, multi-part attention, global explanation, local explanation, pre-trained, transferability, brain tumor segmentation
URN:NBN:
Defence date: 20 June 2024
URI:
