Dunn, Pietro (2024). Online hate speech and intermediary liability in the age of algorithmic moderation. [Dissertation thesis], Alma Mater Studiorum Università di Bologna. PhD programme in Law, Science and Technology, 36th cycle. DOI 10.48676/unibo/amsdottorato/11105.
Abstract
This research investigates the impact of liability-enhancing legal strategies on the governance of online hate speech. The law's increased reliance on private platforms to moderate and remove hate speech deeply affects constitutional principles and individual fundamental rights. For instance, the enhancement of intermediary liability and responsibilities can contribute to the over-removal of user content, with little regard for basic constitutional guarantees. Furthermore, research has shown that the ever-increasing use of automated systems for hate speech moderation gives rise to a whole new set of challenges related to the concrete risk of errors and biased results, leading to a disproportionate removal of content produced by minority, vulnerable, or discriminated-against groups. After addressing the question of the rationale(s) of hate speech regulation and arguing for an increased role for the principle of substantive equality in this regard, this work investigates the developing trends in the imposition of forms of intermediary liability for the spread of hate speech content across the Internet, keeping a close eye on the evolving European framework. In doing so, it also explores the relationship between platforms' content moderation practices and the promotion of fundamental rights and values – including the principle of substantive equality – especially in light of the growing use of artificial intelligence systems for the detection and removal of hate speech. In the context of the European Union, such reflections are of utmost importance particularly following the adoption of the Digital Services Act. In this respect, the work argues for a renewed code of conduct on hate speech, with a view to further protecting constitutional values and the rights of users.
Document type
Doctoral thesis
Author
Dunn, Pietro
Supervisor
Co-supervisor
PhD programme
Cycle
36
Coordinator
Disciplinary sector
Competition sector
Keywords
Hate speech, Intermediary liability, Non-discrimination, EU, Content moderation, Artificial intelligence, Platform governance, Freedom of Expression, Substantive Equality, Internet
URN:NBN
DOI
10.48676/unibo/amsdottorato/11105
Defence date
4 July 2024
URI