Jargon and Readability in Plain Language Summaries of Health Research: Cross-Sectional Observational Study

Bibliographic Details
Published in: Journal of Medical Internet Research vol. 27 (2025), p. e50862
Main Author: Lang, Iain A
Other Authors: King, Angela, Boddy, Kate, Stein, Ken, Asare, Lauren, Day, Jo, Liabo, Kristin
Published: Gunther Eysenbach MD MPH, Associate Professor
Subjects:
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3222367823
003 UK-CbPIL
022 |a 1438-8871 
024 7 |a 10.2196/50862  |2 doi 
035 |a 3222367823 
045 2 |b d20250101  |b d20251231 
100 1 |a Lang, Iain A 
245 1 |a Jargon and Readability in Plain Language Summaries of Health Research: Cross-Sectional Observational Study 
260 |b Gunther Eysenbach MD MPH, Associate Professor  |c 2025 
513 |a Journal Article 
520 3 |a Background: The idea of making science more accessible to nonscientists has prompted health researchers to involve patients and the public more actively in their research. This sometimes involves writing a plain language summary (PLS), a short summary intended to make research findings accessible to nonspecialists. However, whether PLSs satisfy the basic requirements of accessible language is unclear. Objective: We aimed to assess the readability and level of jargon in the PLSs of research funded by the largest national clinical research funder in Europe, the United Kingdom’s National Institute for Health and Care Research (NIHR). We also aimed to assess whether readability and jargon were influenced by internal and external characteristics of research projects. Methods: We downloaded the PLSs of all NIHR National Journals Library reports from mid-2014 to mid-2022 (N=1241) and analyzed them using the Flesch Reading Ease (FRE) formula and a jargon calculator (the De-Jargonizer). In our analysis, we included the following study characteristics of each PLS: research topic, funding program, project size, length, publication year, and readability and jargon scores of the original funding proposal. Results: Readability scores ranged from 1.1 to 70.8, with an average FRE score of 39.0 (95% CI 38.4-39.7). Moreover, 2.8% (35/1241) of the PLSs had an FRE score classified as “plain English” or better; none had readability scores in line with the average reading age of the UK population. Jargon scores ranged from 76.4 to 99.3, with an average score of 91.7 (95% CI 91.5-91.9); 21.7% (269/1241) of the PLSs had a jargon score suitable for general comprehension. Variables such as research topic, funding program, and project size significantly influenced readability and jargon scores. The biggest differences related to the original proposals: proposals with a PLS in their application that were in the 20% most readable were almost 3 times more likely to have a more readable final PLS (incidence rate ratio 2.88, 95% CI 1.86-4.45). Those with the 20% least jargon in the original application were more than 10 times as likely to have low levels of jargon in the final PLS (incidence rate ratio 13.87, 95% CI 5.17-37.2). There was no observable trend over time. Conclusions: Most of the PLSs published in the NIHR’s National Journals Library have poor readability due to their complexity and use of jargon. None were readable at a level in keeping with the average reading age of the UK population. There were significant variations in readability and jargon scores depending on the research topic, funding program, and other factors. Notably, the readability of the original funding proposal seemed to significantly impact the final report’s readability. Ways of improving the accessibility of PLSs are needed, as is greater clarity over who and what they are for. 
651 4 |a United States--US 
651 4 |a United Kingdom--UK 
653 |a Readability 
653 |a Terminology 
653 |a Health care 
653 |a Funding 
653 |a Averages 
653 |a Clinical research 
653 |a Medical research 
653 |a Clinical trials 
653 |a Public health 
653 |a Jargon 
653 |a Ethics 
653 |a Libraries 
653 |a Topics 
653 |a Reading 
653 |a Access 
653 |a Comprehension 
653 |a Health services 
653 |a Health research 
653 |a Scholarship 
653 |a Observational studies 
653 |a Patients 
653 |a Language 
700 1 |a King, Angela 
700 1 |a Boddy, Kate 
700 1 |a Stein, Ken 
700 1 |a Asare, Lauren 
700 1 |a Day, Jo 
700 1 |a Liabo, Kristin 
773 0 |t Journal of Medical Internet Research  |g vol. 27 (2025), p. e50862 
786 0 |d ProQuest  |t Library Science Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3222367823/abstract/embedded/IZYTEZ3DIR4FRXA2?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3222367823/fulltextwithgraphics/embedded/IZYTEZ3DIR4FRXA2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3222367823/fulltextPDF/embedded/IZYTEZ3DIR4FRXA2?source=fedsrch
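
Note on the readability measure: the abstract above (field 520) reports Flesch Reading Ease (FRE) scores for each plain language summary. As a rough orientation only, the Python sketch below applies the standard FRE formula, 206.835 - 1.015 x (words per sentence) - 84.6 x (syllables per word). This is not the authors' code: the regex tokenizer and vowel-group syllable counter are crude placeholder heuristics, and the article's jargon scores came from a separate tool (the De-Jargonizer) that is not reproduced here.

    import re

    def count_syllables(word):
        # Heuristic only: count groups of consecutive vowels.
        # Real readability tools use dictionary-based syllable counts.
        groups = re.findall(r"[aeiouy]+", word.lower())
        return max(1, len(groups))

    def flesch_reading_ease(text):
        # Standard FRE formula:
        # 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        n_sentences = max(1, len(sentences))
        n_words = max(1, len(words))
        return 206.835 - 1.015 * (n_words / n_sentences) - 84.6 * (syllables / n_words)

    if __name__ == "__main__":
        sample = ("We tested whether a new asthma drug helped people breathe more easily. "
                  "Half of the people took the drug and half took a dummy pill.")
        print(round(flesch_reading_ease(sample), 1))  # higher = easier to read

On the standard FRE scale, scores of roughly 60-70 correspond to "plain English"; the abstract's finding that the average PLS scored 39.0 places most summaries well below that band.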