Automation Assistance for Systematic Reviewers

Saved in:
Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: DeYoung, Jay
Published:
ProQuest Dissertations & Theses
Subjects:
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3156317499
003 UK-CbPIL
020 |a 9798302165213 
035 |a 3156317499 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a DeYoung, Jay 
245 1 |a Automation Assistance for Systematic Reviewers 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a How does a medical practitioner know which treatments work? How is the standard of care established, and how is it updated? One approach is to systematically review and assess published medical articles to produce recommendations for practice and for further research. Creating such a review is a labor-intensive process: finding, organizing, and summarizing relevant medical research requires a considerable time investment. Can we, as natural language processing practitioners, assist in creating these reviews? In this defense, I develop a demonstration toolkit and user interface designed to identify where these (imperfect) automation tools are useful now, where they may pose risks when misapplied, and what gaps exist between these technologies and what systematic reviewers want for their workflows. This thesis broadly follows the systematic review process: search (Part I), evidence extraction (Part II), producing a textual synthesis (Part III), culminating with a demonstration of the technology from initial search to ultimate result (Part IV). Throughout this work we discuss measures for safety and correctness. While scoping any review is important, this work does not address deciding what (medical) problems to study, leaving those choices to domain experts. We begin with search assistance (Part I). In this portion I build tools to support systematic reviewers in their initial article screening process: given a review topic, I automatically generate PubMed queries. 
Crafting these queries can be challenging, often requiring assistance from a medical librarian, an expert not always available to the reviewers. I automate production of these queries and conduct interviews with systematic reviewers to gauge the usefulness and effectiveness of such a tool. We continue into evidence extraction (Part II). Given a clinical study, how do we know what interventions it considers, over which patients, and whether the treatments worked? I present the Evidence Inference dataset, a dataset of randomized controlled trials marked with patient populations, medical interventions (and any comparison interventions), and the outcomes the study measured. In this section I develop and refine these datasets and associated modeling challenges. We then step into the realm of evidence synthesis (Part III). I produce a dataset (MS2) of systematic reviews and build models to automate generating a textual synthesis. We study the effectiveness of standard opinion summarization and transformer-based multi-document summarization models (is such a summarization necessarily a synthesis?). In both cases we use the Evidence Inference dataset produced above as an important evaluation measure. Finally, I conclude by building a demonstration application. In this work, I assemble these parts and accompanying models into an application to assist systematic reviewers in their workflow. I conduct an assessment of where these components do (and do not) fit into systematic reviewer workflows (Part IV), resulting in directions for future research. 
653 |a Linguistics 
653 |a Artificial intelligence 
653 |a Computer science 
653 |a Bioinformatics 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3156317499/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3156317499/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch