MARC

LEADER 00000nab a2200000uu 4500
001 3157265560
003 UK-CbPIL
022 |a 0264-0473 
022 |a 1758-616X 
024 7 |a 10.1108/EL-08-2024-0244  |2 doi 
035 |a 3157265560 
045 2 |b d20250101  |b d20250228 
084 |a 36136  |2 nlm 
100 1 |a Zhou, Tao  |u School of Management, Hangzhou Dianzi University, Hangzhou, China 
245 1 4 |a The effect of trust on user adoption of AI-generated content 
260 |b Emerald Group Publishing Limited  |c 2025 
513 |a Journal Article 
520 3 |a Purpose: The purpose of this study is to examine the effect of trust on user adoption of artificial intelligence-generated content (AIGC) based on the stimulus–organism–response framework. Design/methodology/approach: The authors conducted an online survey in China, which is a highly competitive AI market, and obtained 504 valid responses. Both structural equation modelling and fuzzy-set qualitative comparative analysis (fsQCA) were used to conduct data analysis. Findings: The results indicated that perceived intelligence, perceived transparency and knowledge hallucination influence cognitive trust in the platform, whereas perceived empathy influences affective trust in the platform. Both cognitive trust and affective trust in the platform lead to trust in AIGC. Algorithm bias negatively moderates the effect of cognitive trust in the platform on trust in AIGC. The fsQCA identified three configurations leading to adoption intention. Research limitations/implications: The main limitation is that more factors, such as culture, need to be included to examine their possible effects on trust. The implication is that generative AI platforms need to improve intelligence, transparency and empathy, and mitigate knowledge hallucination, to engender users’ trust in AIGC and facilitate adoption. Originality/value: Existing research has mainly used technology adoption theories, such as the unified theory of acceptance and use of technology, to examine AIGC user behaviour and has seldom examined user trust development in the AIGC context. This research tries to fill that gap by disclosing the mechanism underlying AIGC user trust formation. 
651 4 |a China 
653 |a User behavior 
653 |a Anxiety 
653 |a Competitive advantage 
653 |a Social networks 
653 |a Generative artificial intelligence 
653 |a Fuzzy sets 
653 |a Privacy 
653 |a Influence 
653 |a Chatbots 
653 |a Natural language 
653 |a COVID-19 
653 |a Data analysis 
653 |a Qualitative analysis 
653 |a Algorithms 
653 |a False information 
653 |a Large language models 
653 |a Transparency 
653 |a Sociocultural factors 
653 |a Research 
653 |a Technology 
653 |a Adoption of innovations 
653 |a Cognition 
653 |a Artificial intelligence 
653 |a Comparative analysis 
653 |a Cultural factors 
653 |a Responses 
653 |a Structural equation modeling 
653 |a Empathy 
653 |a Cognitive bias 
653 |a Trust 
653 |a Stimulus 
653 |a Linguistic Input 
653 |a Literature Reviews 
653 |a Student Characteristics 
653 |a Science Education 
653 |a Influence of Technology 
653 |a Environmental Influences 
653 |a Intention 
653 |a Natural Language Processing 
653 |a Grounded Theory 
653 |a Science Instruction 
653 |a Social Influences 
653 |a Language Processing 
700 1 |a Lu, Hailin  |u School of Management, Hangzhou Dianzi University, Hangzhou, China 
773 0 |t The Electronic Library  |g vol. 43, no. 1 (2025), p. 61-76 
786 0 |d ProQuest  |t Library Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3157265560/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3157265560/fulltext/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3157265560/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch
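The record above can be consumed programmatically once exported from the catalogue. Below is a minimal sketch, assuming the record has been saved as a binary MARC file named el-2024-0244.mrc (the filename is hypothetical), showing how its key fields could be read with the pymarc library; it is an illustration, not the exporting platform's own tooling.

```python
from pymarc import MARCReader

# Read the exported MARC file and print the fields of interest
# from each record it contains.
with open("el-2024-0244.mrc", "rb") as fh:
    for record in MARCReader(fh):
        # 001: control number (here the ProQuest document ID)
        print("Control number:", record["001"].data)

        # 245 $a/$b: title statement
        print("Title:", record.title)

        # 100/700 $a: main and added author entries
        authors = [sf for f in record.get_fields("100", "700")
                   for sf in f.get_subfields("a")]
        print("Authors:", "; ".join(authors))

        # 653 $a: uncontrolled index terms (keywords)
        keywords = [sf for f in record.get_fields("653")
                    for sf in f.get_subfields("a")]
        print("Keywords:", ", ".join(keywords))

        # 856 $u: electronic location URLs (citation, full text, PDF)
        urls = [sf for f in record.get_fields("856")
                for sf in f.get_subfields("u")]
        print("Links:")
        for url in urls:
            print(" ", url)
```

The same approach extends to the 024 $a DOI or the 773 host-item citation by calling get_fields with those tags.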