“Turning right”? An experimental study on the political value shift in large language models

Bibliographic Details
Published in: Humanities & Social Sciences Communications vol. 12, no. 1 (Dec 2025), p. 179
Published:
Springer Nature B.V.
Subjects:
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3165258100
003 UK-CbPIL
022 |a 2662-9992 
022 |a 2055-1045 
024 7 |a 10.1057/s41599-025-04465-z  |2 doi 
035 |a 3165258100 
045 2 |b d20251201  |b d20251231 
245 1 |a “Turning right”? An experimental study on the political value shift in large language models 
260 |b Springer Nature B.V.  |c Dec 2025 
513 |a Journal Article 
520 3 |a Constructing artificial intelligence that aligns with human values is a crucial challenge, with political values playing a distinctive role among various human value systems. In this study, we adapted the Political Compass Test and combined it with rigorous bootstrapping techniques to create a standardized method for testing political values in AI. This approach was applied to multiple versions of ChatGPT, utilizing a dataset of over 3000 tests to ensure robustness. Our findings reveal that while newer versions of ChatGPT consistently maintain values within the libertarian-left quadrant, there is a statistically significant rightward shift in political values over time, a phenomenon we term a ‘value shift’ in large language models. This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets. While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems. 
610 4 |a OpenAI 
653 |a Data analysis 
653 |a Data collection 
653 |a Research methodology 
653 |a Algorithms 
653 |a Ideology 
653 |a Experiments 
653 |a Large language models 
653 |a Chatbots 
653 |a Questionnaires 
653 |a Political parties 
653 |a Social norms 
653 |a Bias 
653 |a Ethics 
653 |a Bootstrapping 
653 |a Politics 
653 |a Robustness 
653 |a Libertarianism 
653 |a Models 
653 |a Values 
653 |a Decision making 
653 |a Artificial intelligence 
653 |a Alignment 
653 |a Humans 
653 |a Human-computer interaction 
653 |a Language attitudes 
653 |a Language shift 
653 |a Bootstrap method 
653 |a Language modeling 
653 |a Language 
653 |a Tests 
773 0 |t Humanities & Social Sciences Communications  |g vol. 12, no. 1 (Dec 2025), p. 179 
786 0 |d ProQuest  |t Social Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3165258100/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3165258100/fulltext/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3165258100/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch