Cognitive Computing with Large Language Models for Student Assessment Feedback

Bibliographic Details
Published in: Big Data and Cognitive Computing, vol. 9, no. 5 (2025), p. 112
Main Author: Abbas, Noorhan
Other Authors: Atwell, Eric
Publisher: MDPI AG
Online Access: Citation/Abstract | Full Text | Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3211858291
003 UK-CbPIL
022 |a 2504-2289 
024 7 |a 10.3390/bdcc9050112  |2 doi 
035 |a 3211858291 
045 2 |b d20250101  |b d20251231 
100 1 |a Abbas, Noorhan 
245 1 |a Cognitive Computing with Large Language Models for Student Assessment Feedback 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Effective student feedback is fundamental to enhancing learning outcomes in higher education. While traditional assessment methods emphasise both achievements and development areas, the process remains time-intensive for educators. This research explores the application of cognitive computing, specifically open-source Large Language Models (LLMs) Mistral-7B and CodeLlama-7B, to streamline feedback generation for student reports containing both Python programming elements and English narrative content. The findings indicate that these models can provide contextually appropriate feedback on both technical Python coding and English specification and documentation. They effectively identified coding weaknesses and provided constructive suggestions for improvement, as well as insightful feedback on English language quality, structure, and clarity in report writing. These results contribute to the growing body of knowledge on automated assessment feedback in higher education, offering practical insights for institutions considering the implementation of open-source LLMs in their workflows. There are around 22 thousand assessment submissions per year in the School of Computer Science, which is one of eight schools in the Faculty of Engineering and Physical Sciences, which is one of seven faculties in the University of Leeds, which is one of one hundred and sixty-six universities in the UK, so there is clear potential for our methods to scale up to millions of assessment submissions. This study also examines the limitations of current approaches and proposes potential enhancements. The findings support a hybrid system where cognitive computing manages routine tasks and educators focus on complex, personalised evaluations, enhancing feedback quality, consistency, and efficiency in educational settings. 
653 |a Pedagogy 
653 |a Higher education 
653 |a Accuracy 
653 |a Students 
653 |a Computation 
653 |a Feedback 
653 |a Physical sciences 
653 |a Task complexity 
653 |a Data science 
653 |a Automation 
653 |a Privacy 
653 |a Python 
653 |a Open source software 
653 |a Chatbots 
653 |a Coding 
653 |a Colleges & universities 
653 |a Large language models 
653 |a Infrastructure 
653 |a Report writing 
653 |a Proprietary 
653 |a Science education 
653 |a Education 
653 |a Essays 
653 |a Hybrid systems 
653 |a English language 
653 |a Learning 
700 1 |a Atwell, Eric 
773 0 |t Big Data and Cognitive Computing  |g vol. 9, no. 5 (2025), p. 112 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3211858291/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3211858291/fulltext/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3211858291/fulltextPDF/embedded/H09TXR3UUZB2ISDL?source=fedsrch
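
Note: the abstract describes generating formative feedback on student Python and report-writing work with open-source LLMs such as Mistral-7B and CodeLlama-7B. Purely as an illustration, the sketch below shows one way such feedback could be requested from an instruction-tuned model via the Hugging Face transformers library; the model ID, prompt wording, and generation settings are assumptions and do not represent the authors' published pipeline. Running an open-source model locally, rather than calling a proprietary hosted service, is one way to keep student submissions in-house, in line with the privacy and open-source themes among the record's keywords.

```python
# Minimal sketch (not the authors' exact method): asking an open-source,
# instruction-tuned LLM for constructive feedback on a student's Python code.
# Model ID, prompt text, and generation parameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"  # CodeLlama-7B-Instruct could be swapped in for code-heavy reports

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Example student submission with a deliberate weakness (fails on an empty list).
student_code = '''
def mean(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)
'''

messages = [
    {
        "role": "user",
        "content": (
            "You are a teaching assistant. Give concise, constructive feedback "
            "on the following Python code: point out weaknesses and suggest "
            "improvements, but do not rewrite the solution.\n\n" + student_code
        ),
    }
]

# Build the chat-formatted prompt and generate the feedback deterministically.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=300, do_sample=False)
feedback = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
print(feedback)
```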