Weighted double deep Q-network based reinforcement learning for bi-objective multi-workflow scheduling in the cloud

Huifang Li*, Jianghang Huang, Binyang Wang, Yushun Fan

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

22 Citations (Scopus)

Abstract

As a promising distributed paradigm, cloud computing provides a cost-effective environment for hosting scientific applications, since it provisions elastic, heterogeneous resources in a pay-per-use model. More and more applications modeled as workflows are being moved to the cloud, making both execution time and cost important concerns. However, scheduling workflows remains challenging due to their large scale and complexity, as well as the cloud's dynamic characteristics and varied pricing schemes. In this work, we propose a Weighted Double Deep Q-Network-based Reinforcement Learning algorithm (WDDQN-RL) for scheduling multiple workflows, which obtains near-optimal solutions in a relatively short time while minimizing both makespan and cost. Specifically, we first introduce a dynamic coefficient-based adaptive balancing method into WDDQN to improve the accuracy of the target-value estimation by trading off Deep Q-Network (DQN) overestimation against Double Deep Q-Network (DDQN) underestimation. Second, pointer-network-based agents and a two-level scheduling strategy are designed, where pointer networks process a variable-size candidate task set at the first level and one selected task is fed to the agents at the second level for resource allocation. Third, we present a dynamic sensing mechanism that adjusts the model's attention to each individual objective, increasing the diversity of solutions while guaranteeing their quality. Experimental results show that our algorithm outperforms the benchmark approaches on various indicators.
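The weighted target estimation the abstract describes can be illustrated with a minimal sketch. This is a generic weighted double-estimator in NumPy, not the paper's implementation: `beta` stands in for the dynamic coefficient (the paper's adaptive method for setting it is not reproduced here), and the function names are hypothetical.

```python
import numpy as np

def weighted_ddqn_target(reward, gamma, q_online_next, q_target_next, beta):
    """Blend the DQN and DDQN target estimates with coefficient beta.

    beta = 1 -> pure DQN target (max over the target network; tends to overestimate)
    beta = 0 -> pure DDQN target (online net selects the action, target net
                evaluates it; tends to underestimate)
    """
    # DQN estimate: maximize directly over the target network's Q-values
    dqn_estimate = np.max(q_target_next)
    # DDQN estimate: online network picks the action, target network scores it
    ddqn_estimate = q_target_next[np.argmax(q_online_next)]
    return reward + gamma * (beta * dqn_estimate + (1.0 - beta) * ddqn_estimate)
```

With `q_online_next = [1.0, 2.0, 0.5]` and `q_target_next = [0.5, 1.5, 3.0]`, the DQN estimate is 3.0 while the DDQN estimate is 1.5 (action 1 chosen by the online net); intermediate `beta` values interpolate between the two biases.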

Original language: English
Pages (from-to): 751-768
Number of pages: 18
Journal: Cluster Computing
Volume: 25
Issue number: 2
DOI
Publication status: Published - Apr 2022
