Hashemi, Novin
(2025)
Enhancing AI conversational agents for effective advice-giving: exploring factors affecting user-agent interactions, [Dissertation thesis], Alma Mater Studiorum Università di Bologna.
PhD programme in
Management, 36th cycle. DOI 10.48676/unibo/amsdottorato/12431.
Full-text documents available:
PDF document (English)
- Requires a PDF reader such as Xpdf or Adobe Acrobat Reader
Available under licence: Unless broader permissions have been granted by the author, the thesis may be freely consulted, and a copy may be saved and printed strictly for personal study, research, and teaching purposes; any direct or indirect commercial use is expressly forbidden. All other rights to the material are reserved.
Download (2MB)
Abstract
As conversational agents (CAs) become more ubiquitous in digital interactions, it becomes essential to understand whether their design influences users. This study investigates CA communication style in terms of language formality, avatar outfit formality, and advice alignment to see how these factors influence psychological reactance and intention to follow advice. Drawing on politeness theory, psychological reactance theory, and responsibility attribution theory, three empirical models test key determinants of CA effectiveness in advice-taking situations. Model 1 examines the effect of language and avatar formality on advice adherence. Results indicate that formal language has a significant positive effect on advice adherence, whereas the formality of the avatar outfit has no effect. Mediation tests indicate that formal language increases perceived negative politeness, which in turn affects reactance and advice adherence. Model 2 examines the effect of language directness and advice alignment on advice adherence. Outcomes show that advice aligned with the user's previous preferences reduces psychological reactance, thereby increasing advice adherence. Language directness, however, does not independently affect advice adherence, which attests to the importance of alignment in overcoming resistance to AI suggestions. Model 3 turns to how responsibility attribution and the success or failure of advice received from a CA affect future adoption of a chatbot. Users attribute less responsibility to a CA for the outcome when the advice they received was aligned with their previous preferences, while the success of the advice directly and positively affects intention to use the CA. In summary, this research advances the human-machine interaction literature by clarifying how verbal and nonverbal cues influence user behavior in AI-driven advice settings.
Practical implications include strategies for designing persuasive, user-centered CAs that optimize engagement and adherence while managing user resistance and responsibility perceptions.
Document type
Doctoral thesis
Author
Hashemi, Novin
Supervisor
Co-supervisor
PhD programme
Cycle
36
Coordinator
Disciplinary sector
Academic recruitment field
Keywords
Conversational Agents (CAs), Advice Adherence, Politeness Theory, Psychological Reactance, Responsibility Attribution.
DOI
10.48676/unibo/amsdottorato/12431
Defence date
18 June 2025
URI