Stereotyping in Language (Technologies): An Examination of Racial and Gender Stereotypes in Natural Language and Language Models

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Lee, Hojun
Published: ProQuest Dissertations & Theses
Subjects: Computer science; Social psychology; Political science; Artificial intelligence
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3192037404
003 UK-CbPIL
020 |a 9798310392960 
035 |a 3192037404 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Lee, Hojun 
245 1 |a Stereotyping in Language (Technologies): An Examination of Racial and Gender Stereotypes in Natural Language and Language Models 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a This dissertation examines stereotyping across natural language and language technologies through three interconnected studies. The first chapter applies a contemporary model of race relations from social psychology to investigate America's racial framework within American English, revealing how language encodes hierarchical associations between racial/ethnic groups and attributes of superiority and Americanness. The second chapter extends this analysis to Large Language Models (LLMs), finding that these language technologies portray socially subordinate groups as more homogeneous than dominant groups. The third chapter investigates stereotyping in Vision Language Models (VLMs), showing that these language technologies generate more uniform representations for women than for men and for White Americans than for Black Americans, with uniformity increasing for more gender-prototypical appearances. This research demonstrates how stereotypes persist across natural language and AI systems, with many biases in language technologies mirroring established patterns of human social cognition. By integrating the fields of natural language processing and the social sciences through interdisciplinary research, this dissertation documents bias patterns in language technologies and demonstrates how social psychological theories provide valuable tools for detecting and measuring stereotyping in AI systems. 
653 |a Computer science 
653 |a Social psychology 
653 |a Political science 
653 |a Artificial intelligence 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3192037404/abstract/embedded/IZYTEZ3DIR4FRXA2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3192037404/fulltextPDF/embedded/IZYTEZ3DIR4FRXA2?source=fedsrch