Automation Assistance for Systematic Reviewers

Saved in:
Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: DeYoung, Jay
Published: ProQuest Dissertations & Theses
Subjects:
Online Access: Citation/Abstract; Full Text - PDF
Description
Abstract: How does a medical practitioner know which treatments work? How is the standard of care established, and how is it updated? One approach is to systematically review and assess published medical articles to produce recommendations for practice and for further research. Creating such a review is a labor-intensive process: finding, organizing, and summarizing relevant medical research requires a considerable time investment. Can we, as natural language processing practitioners, assist in creating these reviews? In this thesis, I develop a demonstration toolkit and user interface designed to identify where these (imperfect) automation tools are useful now, where they may pose risks when misapplied, and what gaps exist between these technologies and what systematic reviewers want for their workflows.

This thesis broadly follows the systematic review process: search (Part I), evidence extraction (Part II), producing a textual synthesis (Part III), and culminates with a demonstration of the technology from initial search to ultimate result (Part IV). Throughout this work we discuss measures for safety and correctness. While scoping any review is important, this work does not address deciding what (medical) problems to study, leaving those choices to domain experts.

We begin with search assistance (Part I). In this portion I build tools to support systematic reviewers in their initial article screening process: given a review topic, I automatically generate PubMed queries. Crafting these queries can be challenging, often requiring assistance from a medical librarian, an expert not always available to the reviewers. I automate production of these queries and conduct interviews with systematic reviewers to gauge the usefulness and effectiveness of such a tool.

We continue into evidence extraction (Part II). Given a clinical study, how do we know what interventions it considers, over which patients, and whether the treatments worked? I present the Evidence Inference dataset: a dataset of randomized controlled trials annotated with patient populations, medical interventions (and any comparison interventions), and the outcomes the studies measured. In this section I develop and refine these datasets and the associated modeling challenges.

We step into the realm of evidence synthesis (Part III). I produce a dataset (MS2) of systematic reviews and build models to automate generating a textual synthesis. We then study the effectiveness of standard opinion summarization and transformer-based multi-document summarization models (is such a summarization necessarily a synthesis?). In both cases we use the Evidence Inference dataset produced above as an important evaluation measure.

Finally, I conclude by building a demonstration application. I assemble these parts and accompanying models into an application to assist systematic reviewers in their workflow, and conduct an assessment of where these components do (and do not) fit into systematic reviewer workflows (Part IV), resulting in directions for future research.
ISBN:9798302165213
Source: ProQuest Dissertations & Theses Global