Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review

Saved in:
Bibliographic Details
Published in: Journal of Medical Internet Research vol. 27 (2025), p. e60269
Main Author: Sasseville, Maxime
Other Authors: Ouellet, Steven; Rhéaume, Caroline; Malek, Sahlia; Couture, Vincent; Després, Philippe; Paquette, Jean-Sébastien; Darmon, David; Bergeron, Frédéric; Gagnon, Marie-Pierre
Published:
Gunther Eysenbach MD MPH, Associate Professor
Subjects:
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3222368140
003 UK-CbPIL
022 |a 1438-8871 
024 7 |a 10.2196/60269  |2 doi 
035 |a 3222368140 
045 2 |b d20250101  |b d20251231 
100 1 |a Sasseville, Maxime 
245 1 |a Bias Mitigation in Primary Health Care Artificial Intelligence Models: Scoping Review 
260 |b Gunther Eysenbach MD MPH, Associate Professor  |c 2025 
513 |a Journal Article 
520 3 |a Background: Artificial intelligence (AI) predictive models in primary health care have the potential to enhance population health by rapidly and accurately identifying individuals who should receive care and health services. However, these models also carry the risk of perpetuating or amplifying existing biases toward diverse groups. We identified a gap in the current understanding of strategies used to assess and mitigate bias in primary health care algorithms related to individuals’ personal or protected attributes. Objective: This study aimed to describe the attempts, strategies, and methods used to mitigate bias in AI models within primary health care, to identify the diverse groups or protected attributes considered, and to evaluate the results of these approaches on both bias reduction and AI model performance. Methods: We conducted a scoping review following Joanna Briggs Institute (JBI) guidelines, searching Medline (Ovid), CINAHL (EBSCO), PsycINFO (Ovid), and Web of Science databases for studies published between January 1, 2017, and November 15, 2022. Pairs of reviewers independently screened titles and abstracts, applied selection criteria, and performed full-text screening. Discrepancies regarding study inclusion were resolved by consensus. Following reporting standards for AI in health care, we extracted data on study objectives, model features, targeted diverse groups, mitigation strategies used, and results. Using the mixed methods appraisal tool, we appraised the quality of the studies. Results: After removing 585 duplicates, we screened 1018 titles and abstracts. From the remaining 189 full-text articles, we included 17 studies. The most frequently investigated protected attributes were race (or ethnicity), examined in 12 of the 17 studies, and sex (often identified as gender), typically classified as “male versus female,” in 10 of the studies. We categorized bias mitigation approaches into four clusters: (1) modifying existing AI models or datasets, (2) sourcing data from electronic health records, (3) developing tools with a “human-in-the-loop” approach, and (4) identifying ethical principles for informed decision-making. Algorithmic preprocessing methods, such as relabeling and reweighing data, along with natural language processing techniques that extract data from unstructured notes, showed the greatest potential for bias mitigation (a minimal illustrative sketch of reweighing and of an equalized odds check follows this record). Other methods aimed at enhancing model fairness included group recalibration and the application of the equalized odds metric. However, these approaches sometimes exacerbated prediction errors across groups or led to overall model miscalibrations. Conclusions: The results suggest that biases toward diverse groups are more easily mitigated when data are open-sourced, when multiple stakeholders are engaged, and when mitigation is applied at the algorithm’s preprocessing stage. Further empirical studies that include a broader range of groups, such as Indigenous peoples in Canada, are needed to validate and expand upon these findings. Trial Registration: OSF Registry osf.io/9ngz5/; https://osf.io/9ngz5/. International Registered Report Identifier (IRRID): RR2-10.2196/46684 
610 4 |a Joanna Briggs Institute 
653 |a Databases 
653 |a Datasets 
653 |a Computer science 
653 |a Race 
653 |a Primary care 
653 |a Appraisal 
653 |a Mitigation 
653 |a Health services 
653 |a Bias 
653 |a Prediction models 
653 |a Artificial intelligence 
653 |a Algorithms 
653 |a Ethnicity 
653 |a Inclusion 
653 |a Citation indexes 
653 |a Clinical decision making 
653 |a Selection criteria 
653 |a Indigenous peoples 
653 |a Discrepancies 
653 |a Computerized medical records 
653 |a Attributes 
653 |a Health records 
653 |a Ethics 
653 |a Errors 
653 |a Groups 
653 |a Abstracts 
653 |a Medical records 
653 |a Strategies 
653 |a Registration 
653 |a Data 
653 |a Data processing 
653 |a Titles 
653 |a Decision making 
653 |a Natural language processing 
653 |a Health care 
700 1 |a Ouellet, Steven 
700 1 |a Rhéaume, Caroline 
700 1 |a Malek, Sahlia 
700 1 |a Couture, Vincent 
700 1 |a Després, Philippe 
700 1 |a Paquette, Jean-Sébastien 
700 1 |a Darmon, David 
700 1 |a Bergeron, Frédéric 
700 1 |a Gagnon, Marie-Pierre 
773 0 |t Journal of Medical Internet Research  |g vol. 27 (2025), p. e60269 
786 0 |d ProQuest  |t Library Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3222368140/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3222368140/fulltextwithgraphics/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3222368140/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch
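
The abstract above singles out reweighing (a preprocessing step) and the equalized odds metric among the bias mitigation methods the review covers. Below is a minimal, self-contained Python sketch of both, for illustration only: the toy data, variable names, and two-group setup are assumptions of this sketch, not code or data from the reviewed studies. Reweighing follows Kamiran and Calders' formulation, weighting each example by P(A=a)P(Y=y) / P(A=a, Y=y); equalized odds holds when true- and false-positive rates match across groups.

# Illustrative sketch only; not from the reviewed studies.
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-example weight w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y).

    (group, label) combinations that are under-represented relative to
    statistical independence get weights above 1, so a downstream learner
    pays them proportionally more attention during training.
    """
    n = len(labels)
    group_counts = Counter(groups)               # n_a
    label_counts = Counter(labels)               # n_y
    joint_counts = Counter(zip(groups, labels))  # n_{a,y}
    return [
        group_counts[a] * label_counts[y] / (n * joint_counts[(a, y)])
        for a, y in zip(groups, labels)
    ]

def equalized_odds_gaps(y_true, y_pred, groups, group_a, group_b):
    """Absolute TPR and FPR gaps between two groups.

    Both gaps are 0 under perfectly equalized odds.
    """
    def rate(g, actual):
        # Mean prediction among group g's examples with the given true label:
        # TPR when actual == 1, FPR when actual == 0.
        preds = [p for t, p, grp in zip(y_true, y_pred, groups)
                 if grp == g and t == actual]
        return sum(preds) / len(preds)
    tpr_gap = abs(rate(group_a, 1) - rate(group_b, 1))
    fpr_gap = abs(rate(group_a, 0) - rate(group_b, 0))
    return tpr_gap, fpr_gap

# Toy usage with a hypothetical binary protected attribute and outcome.
groups = ["F", "F", "F", "M", "M", "M", "M", "M"]
y_true = [1, 0, 0, 1, 1, 1, 0, 1]
y_pred = [1, 0, 1, 1, 1, 0, 0, 1]
print(reweighing_weights(groups, y_true))
print(equalized_odds_gaps(y_true, y_pred, groups, "F", "M"))

In practice the reweighing weights would feed into a learner's per-sample weighting, and nonzero equalized odds gaps flag the kind of cross-group disparities in prediction error that the review reports some fairness interventions can exacerbate.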