TY - JOUR
T1 - Multi-Agent Collaborative Inference via DNN Decoupling
T2 - Intermediate Feature Compression and Edge Learning
AU - Hao, Zhiwei
AU - Xu, Guanyu
AU - Luo, Yong
AU - Hu, Han
AU - An, Jianping
AU - Mao, Shiwen
N1 - Publisher Copyright:
© 2002-2012 IEEE.
PY - 2023/10/1
Y1 - 2023/10/1
N2 - Recently, deploying deep neural network (DNN) models via collaborative inference, which splits a pre-trained model into two parts executed on user equipment (UE) and an edge server respectively, has become attractive. However, the large intermediate features of DNNs impede flexible decoupling, and existing approaches either focus on the single-UE scenario or define tasks simply by the required CPU cycles, ignoring the indivisibility of a single DNN layer. In this article, we study the multi-agent collaborative inference scenario, where a single edge server coordinates the inference of multiple UEs. Our goal is to achieve fast and energy-efficient inference for all UEs. To this end, we first design a lightweight autoencoder-based method to compress the large intermediate features. We then define tasks according to the inference overhead of DNNs and formulate the problem as a Markov decision process (MDP). Finally, we propose a multi-agent hybrid proximal policy optimization (MAHPPO) algorithm to solve the optimization problem with a hybrid action space. Extensive experiments with different types of networks show that our method reduces inference latency by up to 56% and energy consumption by up to 72%.
AB - Recently, deploying deep neural network (DNN) models via collaborative inference, which splits a pre-trained model into two parts executed on user equipment (UE) and an edge server respectively, has become attractive. However, the large intermediate features of DNNs impede flexible decoupling, and existing approaches either focus on the single-UE scenario or define tasks simply by the required CPU cycles, ignoring the indivisibility of a single DNN layer. In this article, we study the multi-agent collaborative inference scenario, where a single edge server coordinates the inference of multiple UEs. Our goal is to achieve fast and energy-efficient inference for all UEs. To this end, we first design a lightweight autoencoder-based method to compress the large intermediate features. We then define tasks according to the inference overhead of DNNs and formulate the problem as a Markov decision process (MDP). Finally, we propose a multi-agent hybrid proximal policy optimization (MAHPPO) algorithm to solve the optimization problem with a hybrid action space. Extensive experiments with different types of networks show that our method reduces inference latency by up to 56% and energy consumption by up to 72%.
KW - Deep reinforcement learning
KW - collaborative inference
KW - hybrid action space
KW - mobile edge computing
KW - multi-user
UR - https://www.scopus.com/pages/publications/85132769851
U2 - 10.1109/TMC.2022.3183098
DO - 10.1109/TMC.2022.3183098
M3 - Article
AN - SCOPUS:85132769851
SN - 1536-1233
VL - 22
SP - 6041
EP - 6055
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 10
ER -