Improving Tutoring and Evaluation of Software Testing in the Classroom

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Perretta, James
Publisher: ProQuest Dissertations & Theses
Description
Abstract: Software testing is both a critical part of software development and a topic still under-emphasized in computer science curricula. Automated grading systems have greatly improved the quality and frequency of feedback that students receive on their programming assignments. Students are often expected to write tests for their programming projects, but state-of-the-art automated grading systems do not consider how to grade the quality of students’ tests. While instructors might grade student test cases manually, that approach does not scale up to large class sizes and is similarly difficult to scale out across the courses in a curriculum. Most existing automated approaches for grading student tests rely on traditional branch coverage metrics, which have repeatedly been shown not to correlate with test suite quality.

Our goal is to lower the human effort required to develop and deploy programming assignments that emphasize software testing, while also providing higher-quality, testing-focused feedback to students. We do so by leveraging state-of-the-art test quality measurements and novel automated feedback techniques, developing tools and techniques that make it easier for instructors to evaluate student test suite quality and to tutor students. We focus in particular on mutation testing, which approximates the ability of a test suite to find bugs in an implementation. To support this thesis, we make three contributions. First, we study current instructor perspectives and practices around teaching and evaluating software testing. Second, we examine using automatically generated mutants as a substitute both for manually written, instructor-authored mutants and for the real faults in student programs used in an “all-pairs” grading approach. Finally, we examine the effect of instructor-written hints, delivered as actionable automated feedback, on student test suite quality.
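To make the mutation-testing idea in the abstract concrete, the sketch below shows how a mutation score can be computed: a test suite is run against small, deliberately buggy variants (“mutants”) of an implementation, and the fraction of mutants detected approximates the suite’s ability to find bugs. All of the functions, tests, and mutants here are invented for illustration; they are not taken from the dissertation or from any particular grading system.

```python
# Minimal mutation-testing sketch (illustrative only; names and
# mutants are invented for this example, not from the dissertation).

def reference_max(a, b):
    """Correct implementation the mutants are derived from."""
    return a if a >= b else b

# Hand-written mutants: each one introduces a single small bug.
MUTANTS = [
    lambda a, b: a if a <= b else b,  # comparison flipped
    lambda a, b: a,                   # always returns the first argument
    lambda a, b: a if a >= b else 2,  # else-branch result replaced by a constant
]

def student_tests(impl):
    """A (deliberately weak) student test suite.

    Each assertion raises AssertionError when `impl` misbehaves.
    """
    assert impl(1, 2) == 2
    assert impl(5, 5) == 5
    assert impl(3, -1) == 3

def mutation_score(tests, mutants):
    """Fraction of mutants the test suite detects ("kills")."""
    killed = 0
    for mutant in mutants:
        try:
            tests(mutant)       # all assertions passed: the mutant survives
        except AssertionError:
            killed += 1         # some test failed: the mutant is killed
    return killed / len(mutants)

if __name__ == "__main__":
    student_tests(reference_max)  # sanity check: the suite passes on the reference
    # The third mutant survives: the only test that reaches the else
    # branch expects the value 2, so the suite scores 2/3 here.
    print(f"Mutation score: {mutation_score(student_tests, MUTANTS):.2f}")
```

The surviving mutant illustrates the kind of gap the dissertation aims to surface at scale: no test distinguishes the mutant from the correct implementation, so a hint pointing at that mutant tells the student exactly which scenario their suite is missing.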
ISBN: 9798314854204
Source: ProQuest Dissertations & Theses Global