Energy-Efficient Cloud Computing Through Reinforcement Learning-Based Workload Scheduling

Published in: International Journal of Advanced Computer Science and Applications vol. 16, no. 4 (2025)
Main author: PDF
Publisher: Science and Information (SAI) Organization Limited
Available Online: Citation/Abstract; Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3206239795
003 UK-CbPIL
022 |a 2158-107X 
022 |a 2156-5570 
024 7 |a 10.14569/IJACSA.2025.0160464  |2 doi 
035 |a 3206239795 
045 2 |b d20250101  |b d20251231 
100 1 |a PDF 
245 1 |a Energy-Efficient Cloud Computing Through Reinforcement Learning-Based Workload Scheduling 
260 |b Science and Information (SAI) Organization Limited  |c 2025 
513 |a Journal Article 
520 3 |a Cloud computing forms the basis of modern digital infrastructure, providing scalable, on-demand access to computational resources. As demand has grown, however, data center power consumption has risen sharply, increasing operating costs and environmental footprint. Traditional workload scheduling algorithms often prioritize performance and cost over energy efficiency. This paper proposes a workload scheduling method based on deep reinforcement learning (DRL) that adapts dynamically to current cloud conditions to achieve optimal energy efficiency without compromising performance. The proposed method uses Deep Q-Networks (DQN) together with feature engineering to identify key workload parameters such as execution time and CPU and memory consumption, and then schedules tasks intelligently based on these features. In evaluation, the model reduces latency to 15 ms and raises throughput to 500 tasks/sec, with 92% load-balancing efficiency, 95% resource utilization, and 97% QoS. The proposed approach outperforms conventional approaches such as Round Robin, FCFS, and heuristic methods on these key metrics. These findings show how reinforcement learning can significantly enhance the scalability, reliability, and sustainability of cloud environments. Future work will focus on enhancing fault tolerance, incorporating federated learning for decentralized optimization, and testing the model on real-world multi-cloud infrastructures. 
653 |a Parameter identification 
653 |a Task scheduling 
653 |a Cloud computing 
653 |a Fault tolerance 
653 |a Optimization 
653 |a Workload 
653 |a Algorithms 
653 |a Deep learning 
653 |a Federated learning 
653 |a Workloads 
653 |a Heuristic methods 
653 |a Scheduling 
653 |a Computer centers 
653 |a Computers 
653 |a Computer science 
653 |a Computer engineering 
653 |a Energy efficiency 
653 |a Heuristic 
653 |a Energy consumption 
773 0 |t International Journal of Advanced Computer Science and Applications  |g vol. 16, no. 4 (2025) 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3206239795/abstract/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3206239795/fulltextPDF/embedded/75I98GEZK8WCJMPQ?source=fedsrch
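The abstract describes DQN-based scheduling that learns from workload features to place tasks energy-efficiently. As a rough illustration of that idea only (not the paper's implementation), the sketch below uses tabular Q-learning instead of a deep network; the server count, discretized load states, energy-style reward, and all hyperparameters are invented for the example.

```python
import random

random.seed(0)

NUM_SERVERS = 3   # assumed toy cluster size
LOAD_LEVELS = 4   # loads discretized into buckets 0..3

def state_of(loads):
    """Discretize per-server loads into a hashable Q-table state."""
    return tuple(min(l, LOAD_LEVELS - 1) for l in loads)

def reward_of(loads, action):
    """Toy energy-aware reward: penalize placing work on already-busy
    servers (which draw more power) and penalize load imbalance."""
    return -loads[action] - (max(loads) - min(loads))

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, tasks_per_episode=12):
    """Epsilon-greedy tabular Q-learning over simulated scheduling episodes."""
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        loads = [0] * NUM_SERVERS
        for _ in range(tasks_per_episode):
            s = state_of(loads)
            if random.random() < eps:
                a = random.randrange(NUM_SERVERS)          # explore
            else:
                a = max(range(NUM_SERVERS),
                        key=lambda x: q.get((s, x), 0.0))  # exploit
            r = reward_of(loads, a)
            loads[a] += 1
            # each step, every server completes one unit of work
            loads = [max(0, l - 1) for l in loads]
            s2 = state_of(loads)
            best_next = max(q.get((s2, x), 0.0) for x in range(NUM_SERVERS))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return q

def schedule(q, loads):
    """Greedy placement for a new task using the learned Q-values."""
    s = state_of(loads)
    return max(range(NUM_SERVERS), key=lambda x: q.get((s, x), 0.0))
```

In the paper's DQN setting, the Q-table would be replaced by a neural network over continuous workload features (execution time, CPU, memory), but the learning loop and greedy placement follow the same pattern.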