Improving Tutoring and Evaluation of Software Testing in the Classroom

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Perretta, James
Published: ProQuest Dissertations & Theses
Subjects: Computer science; Social psychology; Computer engineering
Links:
Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3200483971
003 UK-CbPIL
020 |a 9798314854204 
035 |a 3200483971 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Perretta, James 
245 1 |a Improving Tutoring and Evaluation of Software Testing in the Classroom 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a Software testing is both a critical part of software development and a topic still under-emphasized in computer science curricula. Automated grading systems have greatly improved the quality and frequency of feedback that students can receive on their programming assignments. Students are often expected to write tests for their programming projects, but state-of-the-art automated grading systems do not consider how to grade the quality of students’ tests. While instructors might manually grade student test cases, such an approach does not scale up to large class sizes and is similarly challenging to scale out across courses in a curriculum. Most existing automated approaches for grading student tests rely on traditional branch coverage metrics, which have repeatedly been shown not to correlate with test suite quality. Our goal is to lower the human effort required to develop and deploy programming assignments that emphasize software testing while also providing higher-quality, testing-focused feedback to students. We aim to do so by leveraging state-of-the-art software test quality measurements and novel automated feedback techniques, developing tools and techniques that make it easier for instructors to evaluate student test suite quality and provide tutoring to students. We focus particularly on mutation testing, which approximates the ability of a test suite to find bugs in an implementation. To support the thesis, we make three contributions. First, we present a study that examines current instructor perspectives and practices around teaching and evaluating software testing. Second, we examine using automatically generated mutants as a substitute for instructor-written mutants and as a substitute for real faults in student programs, such as in an “all-pairs” grading approach. Finally, we examine the effects of using instructor-written hints as actionable automated feedback on student test suite quality. 
653 |a Computer science 
653 |a Social psychology 
653 |a Computer engineering 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3200483971/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3200483971/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch
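
The abstract above hinges on mutation testing as a measure of test suite quality. As a purely illustrative aid, not code from the dissertation, the following minimal Python sketch shows the idea: seeded faulty variants ("mutants") of an implementation are run against a test suite, and the mutation score is the fraction of mutants detected by at least one failing test. All names and values here are hypothetical.

    # A minimal, hypothetical sketch of mutation analysis (illustrative only).

    def add(a, b):
        """Implementation under test."""
        return a + b

    # Hypothetical mutants: variants of add() with seeded faults.
    mutants = [
        lambda a, b: a - b,  # '+' mutated to '-'
        lambda a, b: a * b,  # '+' mutated to '*'
        lambda a, b: b,      # first operand dropped
    ]

    # A weak student-style test suite: each test returns True when it passes.
    tests = [
        lambda f: f(0, 5) == 5,
    ]

    # Sanity check: the real implementation passes the suite.
    assert all(t(add) for t in tests)

    def mutation_score(mutants, tests):
        """Fraction of mutants 'killed' (caught) by at least one failing test."""
        killed = sum(1 for m in mutants if any(not t(m) for t in tests))
        return killed / len(mutants)

    print(f"Mutation score: {mutation_score(mutants, tests):.2f}")  # 0.67

In this sketch the third mutant survives because no test uses a nonzero first operand, illustrating how mutation analysis can expose weaknesses in a test suite that coverage metrics alone would miss.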