GenAI Uses and Challenges in Society: A Macro-Level Analysis

Bibliographic Details
Published in: International Journal of Technology, Knowledge and Society vol. 22, no. 1 (2025), p. 99-125
Main Author: Aminee, Ajmal
Other Authors: Taylor, Joseph
Published: Common Ground Research Networks
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3267484658
003 UK-CbPIL
022 |a 1832-3669 
024 7 |a 10.18848/1832-3669/CGP/v22i01/99-124  |2 doi 
035 |a 3267484658 
045 2 |b d20250101  |b d20251231 
100 1 |a Aminee, Ajmal 
245 1 |a GenAI Uses and Challenges in Society: A Macro-Level Analysis 
260 |b Common Ground Research Networks  |c 2025 
513 |a Journal Article 
520 3 |a Generative Artificial Intelligence (GenAI) is transforming professional, educational, and societal domains, yet its adoption remains uneven, particularly among marginalized and underrepresented groups. This study addresses two core questions: (1) How does GenAI adoption differ across social and demographic groups, and what role does digital literacy play? (2) How can culturally responsive and explainable AI design foster trust, accessibility, and ethical use? Using a mixed-method survey of 542 participants, the study evaluated two hypotheses: first, that adoption rates are lower among marginalized groups due to limited infrastructure and digital literacy; and second, that culturally adaptive, transparent AI systems increase trust and equitable usage. Findings support both hypotheses. Participants aged 26 to 35 with higher education levels and digital fluency were more likely to use GenAI for professional purposes such as data analysis and content generation. In contrast, individuals from lower-income or less-educated backgrounds reported limited access, lower confidence, and heightened ethical concerns. Statistical analysis confirmed a strong correlation between digital literacy and trust in GenAI, while perceived accessibility significantly predicted usage. Ethical and cultural concerns—especially in healthcare, education, and public-sector contexts—emphasized the importance of transparency, explainability, and bias mitigation. This study underscores the urgent need for inclusive GenAI strategies that prioritize equitable access, digital literacy development, and culturally sensitive, explainable design. The findings offer practical insights for global policymakers, developers, and educators working toward responsible and inclusive GenAI integration. 
653 |a Transparency 
653 |a Accessibility 
653 |a Internet 
653 |a Digital literacy 
653 |a Moral education 
653 |a Higher education 
653 |a Marginality 
653 |a Health services 
653 |a Generative artificial intelligence 
653 |a Cultural sensitivity 
653 |a Fluency 
653 |a Policy making 
653 |a Ethics 
653 |a Adoption 
653 |a Explainable artificial intelligence 
653 |a Statistical analysis 
653 |a Teachers 
653 |a Mitigation 
653 |a Access 
653 |a Infrastructure 
653 |a Artificial intelligence 
653 |a Trust 
653 |a Data analysis 
653 |a Health care 
653 |a Hypotheses 
653 |a Education 
653 |a Minority groups 
653 |a Low income groups 
653 |a Quantitative analysis 
653 |a Social groups 
653 |a Educational attainment 
700 1 |a Taylor, Joseph 
773 0 |t International Journal of Technology, Knowledge and Society  |g vol. 22, no. 1 (2025), p. 99-125 
786 0 |d ProQuest  |t ABI/INFORM Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3267484658/abstract/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3267484658/fulltextPDF/embedded/J7RWLIQ9I3C9JK51?source=fedsrch
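
For reference, the MARC fields shown above can be read programmatically once the record is exported in binary MARC format. The following is a minimal sketch using the pymarc library; the file name docview_3267484658.mrc is an assumed local export, not part of this record.

from pymarc import MARCReader

# Hypothetical binary MARC export of the record displayed above.
with open("docview_3267484658.mrc", "rb") as fh:
    for record in MARCReader(fh):
        # 245 $a: title proper
        title = record.get_fields("245")[0].get_subfields("a")[0]
        # 024 $a: DOI (first indicator 7, source named in $2)
        doi = record.get_fields("024")[0].get_subfields("a")[0]
        # 773 $t / $g: host journal title and citation details
        host = record.get_fields("773")[0]
        journal = host.get_subfields("t")[0]
        citation = host.get_subfields("g")[0]
        # Repeated 653 $a: uncontrolled subject terms
        subjects = [f.get_subfields("a")[0] for f in record.get_fields("653")]

        print(title.strip())
        print(doi.strip())
        print(journal.strip(), citation.strip())
        print(", ".join(s.strip() for s in subjects))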