Causality-Driven Techniques for Hate Speech Detection Across Multiple Platforms

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Sheth, Paras
Published: ProQuest Dissertations & Theses
Online Access: Citation/Abstract; Full Text - PDF
Description
Abstract: Social media enables global collaboration and diverse reach. However, this proliferation also brings adversities: hate speech has become widespread across multiple online platforms, causing substantial harm by fueling discrimination, social division, and real-world violence. Hate speech refers to any form of expression that maligns or threatens individuals or groups based on inherent characteristics such as race, religion, gender, or orientation. Because of the societal harm it inflicts, there is a dire need to detect and curb such content. A core challenge, however, lies in the diversity of hate speech across platforms: each online community exhibits distinct vocabularies, norms, and forms of expression, so a detection model trained on one platform often fails to generalize to others. Even within the same platform, shifts in user behavior and new hate targets can alter how hate is expressed. Traditional classifiers, even advanced deep models, often overfit to platform-specific cues and struggle with implicit hate speech and with platforms that have limited labeled data. Moreover, many emerging platforms have scarce labeled data for training, heightening the need for models that can transfer knowledge and operate under such distribution shifts.

In my dissertation, I address these challenges by exploring causality-driven methods to enhance generalizability in hate speech detection. Specifically, my contributions include: (1) leveraging causal cues such as sentiment and aggression to learn more generalized text representations; (2) employing causal disentanglement to identify invariant latent causal factors through auxiliary variables such as hate targets; (3) developing techniques to perform causal disentanglement even with limited auxiliary supervision; and (4) analyzing hate speech from a fine-grained causal perspective, using latent counterfactual generation to improve generalization across styles. Evaluations demonstrate that these causality-driven models generalize successfully across new hate targets, diverse platforms, and varied hate speech styles.
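The first listed contribution, using causal cues such as sentiment and aggression to learn more generalized text representations, can be pictured as a multi-task setup in which a shared encoder must predict the hate label together with the auxiliary cues, discouraging it from relying only on platform-specific surface features. The sketch below is a minimal, assumption-laden illustration of that general idea, not the dissertation's actual model: the GRU encoder, class counts, and loss weighting are all hypothetical choices.

```python
import torch.nn as nn

class CausalCueHateClassifier(nn.Module):
    """Illustrative multi-task model (hypothetical architecture): a shared text
    encoder feeds a hate-label head plus two auxiliary 'causal cue' heads."""

    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.hate_head = nn.Linear(hidden_dim, 2)        # hateful vs. not
        self.sentiment_head = nn.Linear(hidden_dim, 3)   # negative / neutral / positive
        self.aggression_head = nn.Linear(hidden_dim, 2)  # aggressive vs. not

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)
        _, hidden = self.encoder(embedded)
        features = hidden[-1]                            # shared representation
        return (self.hate_head(features),
                self.sentiment_head(features),
                self.aggression_head(features))

def multitask_loss(outputs, hate_y, sent_y, aggr_y, aux_weight=0.5):
    """Joint objective: hate-label loss plus down-weighted auxiliary-cue losses,
    nudging the shared representation toward cues that transfer across platforms.
    The 0.5 weighting is an illustrative default, not a value from the source."""
    ce = nn.CrossEntropyLoss()
    hate_logits, sent_logits, aggr_logits = outputs
    return ce(hate_logits, hate_y) + aux_weight * (ce(sent_logits, sent_y) + ce(aggr_logits, aggr_y))
```

Training the auxiliary heads jointly with the hate head is one simple way to bias the encoder toward signals (tone, aggression) that plausibly cause hateful expression across communities, rather than vocabulary quirks of a single platform.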
ISBN: 9798290969619
Source: ProQuest Dissertations & Theses Global