Optimizing Cloudlets for Faster Feedback in LLM-Based Code-Evaluation Systems

Bibliographic Details
Published in: Computers vol. 14, no. 12 (2025), p. 557-571
Main Author: Dosaru, Daniel-Florin
Other Authors: Olteanu, Alexandru-Corneliu; Țăpuș, Nicolae
Published: MDPI AG
Description
Abstract: This paper addresses the challenge of optimizing cloudlet resource allocation in a code-evaluation system. The study models the relationship between system load and response time when users submit code to an online code-evaluation platform, LambdaChecker, which operates a cloudlet-based processing pipeline. The pipeline includes code-correctness checks, static analysis, and design-pattern detection using a local Large Language Model (LLM). To optimize the system, we develop a mathematical model and apply it to the LambdaChecker resource-management problem. The proposed approach is evaluated using both simulations and real contest data, with a focus on improvements in average response time, resource-utilization efficiency, and user satisfaction. The results indicate that adaptive scheduling and workload prediction effectively reduce waiting times without substantially increasing operational costs. Overall, the study suggests that systematic cloudlet optimization can enhance the educational value of automated code-evaluation systems by improving responsiveness while preserving sustainable resource usage.
ISSN:2073-431X
DOI:10.3390/computers14120557
Source: Advanced Technologies & Aerospace Database
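
Note: the record does not reproduce the paper's actual mathematical model, so the sketch below is only a minimal illustration of the kind of load/response-time relationship, workload prediction, and adaptive sizing the abstract describes. It assumes a standard M/M/c queueing approximation; the function names (erlang_c, mean_response_time, predict_arrival_rate, min_cloudlets_for_target) and all parameter values are hypothetical and not taken from the paper.

import math

def erlang_c(c, a):
    """Probability an arriving job must wait in an M/M/c queue
    with c servers and offered load a = lam / mu (Erlang C formula)."""
    if a >= c:
        return 1.0  # unstable regime: effectively every job waits
    summation = sum(a**k / math.factorial(k) for k in range(c))
    top = a**c / (math.factorial(c) * (1 - a / c))
    return top / (summation + top)

def mean_response_time(lam, mu, c):
    """Mean response time W = mean queueing delay + mean service time."""
    a = lam / mu                          # offered load in Erlangs
    if a >= c:
        return float("inf")               # queue grows without bound
    wq = erlang_c(c, a) / (c * mu - lam)  # mean wait in queue (M/M/c)
    return wq + 1.0 / mu

def predict_arrival_rate(history, alpha=0.3):
    """Exponentially weighted moving average of recent submission rates,
    a stand-in for the workload prediction mentioned in the abstract."""
    est = history[0]
    for lam in history[1:]:
        est = alpha * lam + (1 - alpha) * est
    return est

def min_cloudlets_for_target(lam, mu, target_w, c_max=64):
    """Smallest number of cloudlet workers whose predicted mean response
    time stays below target_w (hypothetical adaptive sizing rule)."""
    for c in range(1, c_max + 1):
        if mean_response_time(lam, mu, c) <= target_w:
            return c
    return None

# Example: recent submission rates (jobs/s) observed during a contest,
# each evaluation taking ~1.5 s on average, with a 3 s response-time target.
predicted_lam = predict_arrival_rate([1.2, 1.8, 2.4, 2.0])
print(min_cloudlets_for_target(predicted_lam, mu=1 / 1.5, target_w=3.0))  # -> 4

Under these assumed numbers the rule provisions four cloudlet workers, illustrating how a load/response-time model plus a workload forecast can drive scheduling decisions without over-provisioning.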