TY - JOUR
T1 - Deep Reinforcement Learning-Based Adaptive Computation Offloading and Power Allocation in Vehicular Edge Computing Networks
AU - Qiu, Bin
AU - Wang, Yunxiao
AU - Xiao, Hailin
AU - Zhang, Zhongshan
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - As a novel paradigm, Vehicular Edge Computing (VEC) can effectively support computation-intensive and delay-sensitive applications in the Internet of Vehicles era. Computation offloading and resource management strategies are key technologies that directly determine the system cost in VEC networks. However, due to vehicle mobility and stochastically arriving computation tasks, designing an optimal offloading and resource allocation policy is extremely challenging. To address this challenge, a deep reinforcement learning-based intelligent offloading and power allocation scheme is proposed to minimize the total task delay and energy consumption in dynamic heterogeneous VEC networks. Specifically, we first construct an end-edge-cloud offloading model for a bidirectional road scenario that accounts for stochastic task arrivals, time-varying channel conditions, and vehicle mobility. With the objective of minimizing the long-term total cost, composed of energy consumption and task delay, we formulate the optimization problem as a Markov Decision Process (MDP). Moreover, given the high-dimensional continuous action space and the dynamics of task generation, we propose a deep deterministic policy gradient-based adaptive computation offloading and power allocation (DDPG-ACOPA) algorithm to solve the formulated MDP. Extensive simulation results demonstrate that the proposed DDPG-ACOPA algorithm performs well in the dynamic heterogeneous VEC environment, significantly outperforming four baseline schemes.
KW - Vehicular edge computing
KW - computation offloading
KW - deep deterministic policy gradient (DDPG)
KW - power allocation
UR - http://www.scopus.com/inward/record.url?scp=85192150982&partnerID=8YFLogxK
DO - 10.1109/TITS.2024.3391831
M3 - Article
AN - SCOPUS:85192150982
SN - 1524-9050
VL - 25
SP - 13339
EP - 13349
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
IS - 10
ER -