An Advanced Reinforcement Learning Framework for Online Scheduling of Deferrable Workloads in Cloud Computing

Bibliographic Details
Published in: arXiv.org (Jun 3, 2024), p. n/a
Main Author: Dong, Hang
Other Authors: Zhu, Liwen, Zhao, Shan, Qiao, Bo, Yang, Fangkai, Qin, Si, Luo, Chuan, Lin, Qingwei, Yang, Yuwen, Virdi, Gurpreet, Rajmohan, Saravan, Zhang, Dongmei, Moscibroda, Thomas
Published:
Cornell University Library, arXiv.org
Subjects: Scheduling, User experience, Performance enhancement, Task scheduling, Resource utilization, Computer aided scheduling, Deep learning, Cloud computing
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3064389478
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3064389478 
045 0 |b d20240603 
100 1 |a Dong, Hang 
245 1 |a An Advanced Reinforcement Learning Framework for Online Scheduling of Deferrable Workloads in Cloud Computing 
260 |b Cornell University Library, arXiv.org  |c Jun 3, 2024 
513 |a Working Paper 
520 3 |a Efficient resource utilization and a seamless user experience often conflict in cloud computing platforms, and considerable effort has been invested in increasing resource utilization without degrading the user experience. To better utilize the fragments of computing capacity spread across the platform, deferrable jobs are offered to users at a discounted price: a user submits a job that must run for a specific uninterrupted duration but may start anywhere within a flexible future time window. Because these jobs are scheduled under the capacity left over after on-demand jobs are deployed, it remains challenging to achieve high resource utilization while minimizing user waiting time in an online manner. In this paper, we propose an online deferrable job scheduling method called Online Scheduling for DEferrable jobs in Cloud (OSDEC), in which a deep reinforcement learning model learns the scheduling policy and several auxiliary tasks provide better state representations and improve model performance. With this integrated reinforcement learning framework, the proposed method plans the deployment schedule effectively, achieving short waiting times for users while maintaining high resource utilization for the platform. The method is validated on a public dataset and shows superior performance. 
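The problem setting in the abstract, a job with a fixed uninterrupted duration that may start anywhere in a flexible window and must fit under the capacity left by on-demand jobs, can be made concrete with a minimal sketch. Everything below (the names DeferrableJob, feasible_starts, and schedule, the greedy earliest-start rule, and the toy capacity profile) is an illustrative assumption, not the OSDEC implementation; the paper's contribution is replacing the greedy rule with a learned deep-RL policy aided by auxiliary tasks.

```python
# Minimal sketch of the deferrable-job setting, assuming discrete time slots
# and a per-slot spare-capacity profile. Not the OSDEC algorithm.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DeferrableJob:
    demand: int      # cores requested
    duration: int    # uninterrupted run length, in time slots
    arrival: int     # earliest slot the job may start
    deadline: int    # exclusive end: job must finish before this slot

def feasible_starts(job: DeferrableJob, free: List[int]) -> List[int]:
    """Start slots where the job fits entirely under the spare capacity
    left after on-demand jobs (free[t] = spare cores in slot t)."""
    last_start = job.deadline - job.duration
    return [
        t for t in range(job.arrival, last_start + 1)
        if all(free[t + k] >= job.demand for k in range(job.duration))
    ]

def schedule(job: DeferrableJob, free: List[int]) -> Optional[int]:
    """Greedy baseline: start as early as possible to cut waiting time.
    OSDEC replaces this fixed rule with a learned scheduling policy."""
    starts = feasible_starts(job, free)
    if not starts:
        return None                    # no feasible placement yet: keep deferring
    t0 = starts[0]
    for k in range(job.duration):      # reserve capacity for the chosen window
        free[t0 + k] -= job.demand
    return t0

# Toy capacity profile: spare cores per slot after on-demand placement.
free_cores = [4, 2, 2, 5, 5, 5, 1, 4]
job = DeferrableJob(demand=3, duration=3, arrival=0, deadline=7)
print(schedule(job, free_cores))       # -> 3 (earliest slot with 3 spare cores for 3 slots)
```

Even this toy version shows why the problem is hard online: committing a job to the earliest feasible window consumes capacity that future, still-unseen jobs may have used more efficiently, which is the tension the learned policy is designed to manage.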
653 |a Scheduling 
653 |a User experience 
653 |a Performance enhancement 
653 |a Task scheduling 
653 |a Resource utilization 
653 |a Computer aided scheduling 
653 |a Deep learning 
653 |a Cloud computing 
700 1 |a Zhu, Liwen 
700 1 |a Zhao, Shan 
700 1 |a Qiao, Bo 
700 1 |a Yang, Fangkai 
700 1 |a Qin, Si 
700 1 |a Luo, Chuan 
700 1 |a Lin, Qingwei 
700 1 |a Yang, Yuwen 
700 1 |a Virdi, Gurpreet 
700 1 |a Rajmohan, Saravan 
700 1 |a Zhang, Dongmei 
700 1 |a Moscibroda, Thomas 
773 0 |t arXiv.org  |g (Jun 3, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3064389478/abstract/embedded/J7RWLIQ9I3C9JK51?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2406.01047