TY - JOUR
T1 - Task Offloading Algorithm Based on Preference Weight Adaptive Multi-objective Reinforcement Learning for Cloud-Assisted Mobile Edge Computing in Internet of Vehicles
AU - Zou, Yuan
AU - Wu, Jinming
AU - Zhang, Xudong
AU - Liu, Jiahui
AU - Sun, Wenjing
N1 - Publisher Copyright:
© China Society of Automotive Engineers (China SAE) 2026.
PY - 2026
Y1 - 2026
N2 - As the Internet of Things proliferates, cloud-assisted mobile edge computing (MEC) enables intelligent connected vehicles to efficiently offload their computationally intensive tasks to servers within the Internet of Vehicles (IoV), facilitating services such as collaborative perception, augmented reality, and cooperative navigation. The dynamic and uncertain nature of cloud-assisted MEC environments challenges the generalization and robustness of offloading algorithms, which must adapt to changing conditions. This work explores the trade-off between delay and energy consumption, formulating the task offloading problem as a multi-objective optimization (MOO) problem. Given the time-varying and uncertain nature of the cloud-assisted MEC system, deep reinforcement learning emerges as a potent solution to this computation offloading problem. Most existing multi-objective reinforcement learning-based methods draw inspiration from decomposition-based multi-objective optimization, which makes the resulting policies inconvenient to apply and extend. To address this issue, the established MOO problem is reformulated as a multi-objective Markov decision process, and an innovative end-to-end method based on weight-adaptive Dueling Double DQN, named WA-D3QN, is proposed. Specifically, it combines the strengths of Dueling Double DQN and preference-conditioned multi-objective reinforcement learning, while also leveraging deep residual neural networks (ResNet) and a multi-head attention (MHA) mechanism to enhance convergence. Comprehensive simulations demonstrate that WA-D3QN surpasses existing baselines, including a 23.63% increase in the hypervolume of the Pareto front compared with the Non-dominated Sorting Genetic Algorithm II.
AB - As the Internet of Things proliferates, cloud-assisted mobile edge computing (MEC) enables intelligent connected vehicles to efficiently offload their computationally intensive tasks to servers within the Internet of Vehicles (IoV), facilitating services such as collaborative perception, augmented reality, and cooperative navigation. The dynamic and uncertain nature of cloud-assisted MEC environments challenges the generalization and robustness of offloading algorithms, which must adapt to changing conditions. This work explores the trade-off between delay and energy consumption, formulating the task offloading problem as a multi-objective optimization (MOO) problem. Given the time-varying and uncertain nature of the cloud-assisted MEC system, deep reinforcement learning emerges as a potent solution to this computation offloading problem. Most existing multi-objective reinforcement learning-based methods draw inspiration from decomposition-based multi-objective optimization, which makes the resulting policies inconvenient to apply and extend. To address this issue, the established MOO problem is reformulated as a multi-objective Markov decision process, and an innovative end-to-end method based on weight-adaptive Dueling Double DQN, named WA-D3QN, is proposed. Specifically, it combines the strengths of Dueling Double DQN and preference-conditioned multi-objective reinforcement learning, while also leveraging deep residual neural networks (ResNet) and a multi-head attention (MHA) mechanism to enhance convergence. Comprehensive simulations demonstrate that WA-D3QN surpasses existing baselines, including a 23.63% increase in the hypervolume of the Pareto front compared with the Non-dominated Sorting Genetic Algorithm II.
KW - Multi-head attention mechanism
KW - Multi-objective reinforcement learning
KW - Task offloading
KW - Weight adaptive
UR - https://www.scopus.com/pages/publications/105027241491
U2 - 10.1007/s42154-024-00350-8
DO - 10.1007/s42154-024-00350-8
M3 - Article
AN - SCOPUS:105027241491
SN - 2096-4250
JO - Automotive Innovation
JF - Automotive Innovation
ER -