ILP Optimized LSTM-based Autoscaling and Scheduling of Containers in Edge-cloud Environment

Published in: Journal of Telecommunications and Information Technology no. 2 (2025), p. 56-69
Main author: Singh, Shivan
Other authors: G, Narayan D; Mujawar, Sadaf; Hanchinamani, G S; Hiremath, P S
Published by: Instytut Lacznosci - Panstwowy Instytut Badawczy (National Institute of Telecommunications)
Online access: Citation/Abstract; Full Text; Full Text - PDF
Description
Abstract: Edge computing is a decentralized computing paradigm that brings computation and data storage closer to data sources, enabling faster processing and reduced latency. This approach is critical for real-time applications, but it introduces significant challenges in managing resources efficiently in edge-cloud environments. Issues such as increased response times, inefficient autoscaling, and suboptimal task scheduling arise due to the dynamic and resource-constrained nature of edge nodes. Kubernetes, a widely used container orchestration platform, provides basic autoscaling and scheduling mechanisms, but its default configurations often fail to meet the stringent performance requirements of edge environments, especially in lightweight implementations such as KubeEdge. This work presents an ILP-optimized, LSTM-based approach for autoscaling and scheduling in edge-cloud environments. The LSTM model forecasts resource demands using both real-time and historical data, enabling proactive resource allocation, while the integer linear programming (ILP) framework optimally assigns workloads and scales containers to meet predicted demands. By jointly addressing autoscaling and scheduling challenges, the proposed method improves response time and resource utilization. The experimental setup is built on a KubeEdge testbed deployed across 11 nodes (1 cloud node and 10 edge nodes). Experimental results show that the ILP-enhanced framework achieves a 12.34% reduction in response time and a 7.85% increase in throughput compared to the LSTM-only approach.
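To illustrate the kind of decision the abstract's ILP stage makes (this is a generic sketch, not the authors' formulation), the core problem is assigning forecast workloads to nodes so that capacity constraints hold and a response-time cost is minimized. For a toy instance small enough to enumerate exhaustively, the optimal integer assignment can be found by brute force; the node capacities, demand forecasts, and latency costs below are hypothetical placeholders for the LSTM predictions and measured latencies:

```python
from itertools import product

# Toy instance (hypothetical numbers): place 3 predicted workloads on
# 2 edge nodes and 1 cloud node. assign[w] = node chosen for workload w.
demand = [2, 3, 1]            # CPU units per workload (stand-in for LSTM forecasts)
capacity = [4, 4, 10]         # node CPU capacity: edge0, edge1, cloud
latency = [                   # response-time cost of workload w on node n
    [1, 1, 5],                # edge placements are cheap; cloud adds WAN latency
    [1, 2, 5],
    [2, 1, 5],
]

best_cost, best_assign = float("inf"), None
for assign in product(range(3), repeat=3):   # enumerate every integer assignment
    load = [0, 0, 0]
    for w, n in enumerate(assign):
        load[n] += demand[w]
    if any(load[n] > capacity[n] for n in range(3)):
        continue                             # capacity constraint violated
    cost = sum(latency[w][assign[w]] for w in range(3))
    if cost < best_cost:
        best_cost, best_assign = cost, assign

print(best_assign, best_cost)  # -> (1, 0, 1) 3: all workloads stay on edge nodes
```

At the paper's scale (11 nodes, many containers), enumeration is infeasible and a proper ILP solver would replace the loop, but the objective and constraints are the same shape.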
ISSN: 1509-4553; 1899-8852
DOI: 10.26636/jtit.2025.2.2088
Source: Advanced Technologies & Aerospace Database