Optimizing Cloudlets for Faster Feedback in LLM-Based Code-Evaluation Systems

Bibliographic Details
Published in: Computers, vol. 14, no. 12 (2025), pp. 557-571
Main author: Dosaru, Daniel-Florin
Other authors: Olteanu, Alexandru-Corneliu; Țăpuș, Nicolae
Publisher: MDPI AG
Description
Abstract: This paper addresses the challenge of optimizing cloudlet resource allocation in a code-evaluation system. The study models the relationship between system load and response time when users submit code to an online code-evaluation platform, LambdaChecker, which operates a cloudlet-based processing pipeline. The pipeline includes code correctness checks, static analysis, and design-pattern detection using a local Large Language Model (LLM). To optimize the system, we develop a mathematical model and apply it to the LambdaChecker resource-management problem. The proposed approach is evaluated using both simulations and real contest data, with a focus on improvements in average response time, resource-utilization efficiency, and user satisfaction. The results indicate that adaptive scheduling and workload prediction effectively reduce waiting times without substantially increasing operational costs. Overall, the study suggests that systematic cloudlet optimization can enhance the educational value of automated code-evaluation systems by improving responsiveness while preserving sustainable resource usage.
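
The abstract describes modeling how response time grows with system load as submissions arrive at a pool of cloudlets, but the paper's actual model is not reproduced in this record. As a purely illustrative sketch, the Python snippet below uses a standard M/M/c (Erlang-C) queueing formula, with assumed arrival and service rates, to show the kind of load-versus-response-time relationship such a model captures and how adding cloudlets shortens waiting times.

    # Illustrative sketch only: a textbook M/M/c queue stands in for the
    # paper's (unspecified) load/response-time model. All rates are assumed.
    from math import factorial

    def erlang_c(c: int, rho: float) -> float:
        """Probability that an arriving job must wait (Erlang-C).
        c: number of cloudlet workers, rho: offered load = lam / mu."""
        if rho >= c:
            return 1.0  # unstable regime: every job queues
        summation = sum(rho**k / factorial(k) for k in range(c))
        top = rho**c / factorial(c) * c / (c - rho)
        return top / (summation + top)

    def avg_response_time(lam: float, mu: float, c: int) -> float:
        """Mean time in system (queueing + service) for an M/M/c queue.
        lam: submissions per second, mu: evaluations per second per cloudlet."""
        rho = lam / mu
        wait = erlang_c(c, rho) / (c * mu - lam)
        return wait + 1.0 / mu

    if __name__ == "__main__":
        mu = 0.5  # assumed: each cloudlet completes ~1 evaluation every 2 s
        for c in (2, 4, 8):
            for lam in (0.5, 1.0, 1.5):
                if lam < c * mu:  # skip unstable configurations
                    t = avg_response_time(lam, mu, c)
                    print(f"cloudlets={c} arrivals={lam}/s -> mean response {t:.2f} s")

Running the sketch prints mean response times for a few cloudlet counts and submission rates; configurations whose arrival rate meets or exceeds total service capacity are skipped as unstable.
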
ISSN: 2073-431X
DOI: 10.3390/computers14120557
Source: Advanced Technologies & Aerospace Database