Abstract
Unmanned Aerial Vehicle (UAV)-aided target tracking has been applied in many practical scenarios, such as search and rescue missions. Edge computing has emerged as a promising solution to improve the performance of UAV-aided target tracking by facilitating computational offloading from the UAV to edge nodes (ENs). However, due to the mobility of UAVs and the limited energy and coverage of ENs, designing offloading policies remains challenging. To address these challenges, this paper studies the problem of UAV task allocation by jointly deciding which EN should execute an arriving video task and how to adjust the transmit power of the UAV to achieve successful tracking. The problem is modeled as a Markov Decision Process (MDP) that balances energy cost against time cost. We propose a Q-learning based approach to solve this problem. Numerical simulation results demonstrate that our algorithm achieves significant improvement over baseline methods.
Original language | English |
---|---|
Pages (from-to) | 123-130 |
Number of pages | 8 |
Journal | Computer Communications |
Volume | 201 |
DOIs | |
Publication status | Published - 1 Mar 2023 |
Externally published | Yes |
Keywords
- MDP
- Q-learning
- Task allocation
- UAV target tracking
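To illustrate the flavor of the Q-learning approach described in the abstract, the sketch below trains a tabular Q-learning agent on a toy offloading MDP. Everything here is an assumption for illustration only: the state space (UAV position zones), the action set (EN choice paired with a transmit power level), the energy/latency cost models, and all hyperparameters are hypothetical and are not taken from the paper.

```python
import random

# Illustrative tabular Q-learning for a toy task-offloading MDP.
# States: hypothetical UAV position zones.
# Actions: (edge node, transmit power level) pairs.
# Reward: negative weighted sum of energy cost and time cost
# (the weights and cost models below are invented for this sketch).

N_STATES = 4                  # hypothetical UAV position zones
ENS = [0, 1]                  # two candidate edge nodes
POWERS = [0.1, 0.5]           # transmit power levels (illustrative)
ACTIONS = [(e, p) for e in ENS for p in POWERS]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
W_ENERGY, W_TIME = 0.5, 0.5   # weights balancing energy and time cost

def cost(state, action):
    """Toy cost: energy grows with power; latency shrinks with power
    and with proximity of the EN to the UAV's zone (fabricated model)."""
    en, power = action
    energy = power
    latency = 1.0 / (power * (1 + (en == state % 2)))
    return W_ENERGY * energy + W_TIME * latency

Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

random.seed(0)
state = 0
for _ in range(5000):
    # Epsilon-greedy action selection over the Q-table row.
    if random.random() < EPSILON:
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
    reward = -cost(state, ACTIONS[a])
    next_state = random.randrange(N_STATES)   # toy UAV mobility model
    # Standard Q-learning temporal-difference update.
    Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
    state = next_state

# Greedy policy: preferred (EN, power) pair for each UAV zone.
policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[s][i])]
          for s in range(N_STATES)]
print(policy)
```

The essential ingredients match the abstract's setup: the action jointly selects an EN and a transmit power, and the reward trades off energy against latency; the paper's actual state definition, cost models, and training details would differ.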