ILP Optimized LSTM-based Autoscaling and Scheduling of Containers in Edge-cloud Environment

Bibliographic Details
Published in: Journal of Telecommunications and Information Technology no. 2 (2025), p. 56-69
Main Author: Singh, Shivan
Other Authors: G, Narayan D; Mujawar, Sadaf; Hanchinamani, G S; Hiremath, P S
Published: Instytut Lacznosci - Panstwowy Instytut Badawczy (National Institute of Telecommunications)
Subjects:
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3228539650
003 UK-CbPIL
022 |a 1509-4553 
022 |a 1899-8852 
024 7 |a 10.26636/jtit.2025.2.2088  |2 doi 
035 |a 3228539650 
045 2 |b d20250401  |b d20250630 
084 |a 249928  |2 nlm 
100 1 |a Singh, Shivan  |u KLE Technological University, Hubballi, Karnataka, India 
245 1 |a ILP Optimized LSTM-based Autoscaling and Scheduling of Containers in Edge-cloud Environment 
260 |b Instytut Lacznosci - Panstwowy Instytut Badawczy (National Institute of Telecommunications)  |c 2025 
513 |a Journal Article 
520 3 |a Edge computing is a decentralized computing paradigm that brings computation and data storage closer to data sources, enabling faster processing and reduced latency. This approach is critical for real-time applications, but it introduces significant challenges in managing resources efficiently in edge-cloud environments. Issues such as increased response times, inefficient autoscaling, and suboptimal task scheduling arise due to the dynamic and resource-constrained nature of edge nodes. Kubernetes, a widely used container orchestration platform, provides basic autoscaling and scheduling mechanisms, but its default configurations often fail to meet the stringent performance requirements of edge environments, especially in lightweight implementations like KubeEdge. This work presents an ILP-optimized, LSTM-based approach for autoscaling and scheduling in edge-cloud environments. The LSTM model forecasts resource demands using both real-time and historical data, enabling proactive resource allocation, while the integer linear programming (ILP) framework optimally assigns workloads and scales containers to meet predicted demands. By jointly addressing auto-scaling and scheduling challenges, the proposed method improves response time and resource utilization. The experimental setup is built on a KubeEdge testbed deployed across 11 nodes (1 cloud node and 10 edge nodes). Experimental results show that the ILP-enhanced framework achieves a 12.34% reduction in response time and a 7.85% increase in throughput compared to the LSTM-only approach. 
653 |a Smart cities 
653 |a Scheduling 
653 |a Linear programming 
653 |a Containers 
653 |a Metadata 
653 |a Task scheduling 
653 |a Response time 
653 |a Edge computing 
653 |a Integer programming 
653 |a Communication 
653 |a Optimization 
653 |a Real time 
653 |a Decision making 
653 |a Resource allocation 
653 |a Nodes 
653 |a Data storage 
653 |a Algorithms 
653 |a Automation 
653 |a Resource utilization 
653 |a Workloads 
700 1 |a G, Narayan D  |u KLE Technological University, Hubballi, Karnataka, India 
700 1 |a Mujawar, Sadaf  |u KLE Technological University, Hubballi, Karnataka, India 
700 1 |a Hanchinamani, G S  |u KLE Technological University, Hubballi, Karnataka, India 
700 1 |a Hiremath, P S  |u KLE Technological University, Hubballi, Karnataka, India 
773 0 |t Journal of Telecommunications and Information Technology  |g no. 2 (2025), p. 56-69 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3228539650/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3228539650/fulltext/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3228539650/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch
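
Note: to make the approach summarized in the 520 abstract field more concrete, the sketch below illustrates the general pattern of pairing a demand forecaster with an ILP placement step. It is not the authors' implementation: the LSTM forecaster is replaced by a stub, the container-scaling (replica count) decision is omitted, and the objective (minimizing total latency cost) and CPU-capacity constraints are illustrative assumptions. The ILP is solved with the PuLP library; all node and service names are hypothetical.

```python
# Illustrative sketch only: a forecaster (stub standing in for the LSTM model)
# predicts per-service CPU demand, and an ILP assigns each service to an
# edge or cloud node. Objective and constraints are assumed, not the paper's.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

def forecast_demand(history):
    """Stand-in for the LSTM forecaster: predict next-interval CPU demand
    (millicores) per service from recent observations."""
    return {svc: int(sum(obs) / len(obs)) for svc, obs in history.items()}

# Hypothetical cluster: one cloud node and three edge nodes (CPU in millicores)
capacity = {"cloud-0": 8000, "edge-0": 2000, "edge-1": 2000, "edge-2": 2000}
# Assumed latency cost (ms) of serving a request from each node
latency = {"cloud-0": 45, "edge-0": 8, "edge-1": 10, "edge-2": 12}

history = {"svc-a": [300, 350, 400], "svc-b": [900, 950, 1000], "svc-c": [150, 200, 180]}
demand = forecast_demand(history)

prob = LpProblem("edge_cloud_scheduling", LpMinimize)
x = LpVariable.dicts("assign", [(s, n) for s in demand for n in capacity], cat=LpBinary)

# Objective: minimize the total latency cost of the chosen placements
prob += lpSum(latency[n] * x[(s, n)] for s in demand for n in capacity)

# Each service is placed on exactly one node
for s in demand:
    prob += lpSum(x[(s, n)] for n in capacity) == 1

# Predicted demand on a node must not exceed its CPU capacity
for n in capacity:
    prob += lpSum(demand[s] * x[(s, n)] for s in demand) <= capacity[n]

prob.solve()
for s in demand:
    for n in capacity:
        if value(x[(s, n)]) > 0.5:
            print(f"{s} (predicted {demand[s]}m CPU) -> {n}")
```

In this toy instance the predicted demands (roughly 350, 950, and 176 millicores) all fit on the lowest-latency edge node, so the solver places every service there; once a node's capacity is exhausted, the capacity constraint forces spillover to other edge nodes or the cloud, which is the trade-off the abstract's joint autoscaling and scheduling formulation targets.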