Causality-Driven Techniques for Hate Speech Detection Across Multiple Platforms

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Sheth, Paras
Published:
ProQuest Dissertations & Theses
Subjects: Computer science; Computer engineering; Artificial intelligence; Information technology
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3240604237
003 UK-CbPIL
020 |a 9798290969619 
035 |a 3240604237 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Sheth, Paras 
245 1 |a Causality-Driven Techniques for Hate Speech Detection Across Multiple Platforms 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a Social media enables global collaboration and broad reach. However, alongside these benefits, its proliferation also brings adversities. In particular, hate speech has become widespread across multiple online platforms, causing substantial harm by fueling discrimination, social division, and real-world violence. Hate speech refers to any form of expression that maligns or threatens individuals or groups based on inherent characteristics such as race, religion, gender, or orientation. Given the societal harm it inflicts, there is a dire need to detect and curb such content. However, a core challenge lies in the diversity of hate speech across platforms: each online community exhibits distinct vocabularies, norms, and forms of expression, so a detection model trained on one platform often fails to generalize to others. Even within the same platform, shifts in user behavior and the emergence of new hate targets can alter how hate is expressed. Traditional classifiers, even advanced deep models, often overfit to platform-specific cues and struggle with implicit hate speech and with platforms that have limited labeled data. Moreover, many emerging platforms have scarce labeled data for training, heightening the need for models that can transfer knowledge and operate under such distribution shifts. In my dissertation, I address these challenges by exploring causality-driven methods to enhance generalizability in hate speech detection. Specifically, my contributions include: (1) leveraging causal cues such as sentiment and aggression to learn more generalized text representations; (2) employing causal disentanglement to identify invariant latent causal factors through auxiliary variables such as hate targets; (3) developing techniques to perform causal disentanglement even with limited auxiliary supervision; and (4) analyzing hate speech from a fine-grained causal perspective, using latent counterfactual generation to improve generalization across styles. Evaluations demonstrate that these causality-driven models successfully generalize to new hate targets, diverse platforms, and varied hate speech styles. 
653 |a Computer science 
653 |a Computer engineering 
653 |a Artificial intelligence 
653 |a Information technology 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3240604237/abstract/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3240604237/fulltextPDF/embedded/75I98GEZK8WCJMPQ?source=fedsrch