TY - JOUR
T1 - Compound Learning-Based Model Predictive Control Approach for Ducted-Fan Aerial Vehicles
AU - Manzoor, Tayyab
AU - Pei, Hailong
AU - Xia, Yuanqing
AU - Sun, Zhongqi
AU - Ali, Yasir
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Designing an efficient learning-based model predictive control (MPC) framework for ducted-fan unmanned aerial vehicles (DFUAVs) is a difficult task due to several factors involving uncertain dynamics, coupled motion, and unorthodox aerodynamic configuration. Existing control techniques are either developed from largely known physics-informed models or are made for specific goals. In this regard, this article proposes a compound learning-based MPC approach for DFUAVs to construct a suitable framework that exhibits efficient dynamics learning capability with adequate disturbance rejection characteristics. At the start, a nominal model from a largely unknown DFUAV model is achieved offline through sparse identification. Afterward, a reinforcement learning (RL) mechanism is deployed online to learn a policy to facilitate the initial guesses for the control input sequence. Thereafter, an MPC-driven optimization problem is developed, where the obtained nominal (learned) system is updated by the real system, yielding improved computational efficiency for the overall control framework. Under appropriate assumptions, stability and recursive feasibility are compactly ensured. Finally, a comparative study is conducted to illustrate the efficacy of the designed scheme.
KW - Aerial robots
KW - machine learning (ML)
KW - model predictive control (MPC)
KW - reinforcement learning (RL)
KW - unmanned aerial vehicles (UAVs)
UR - http://www.scopus.com/inward/record.url?scp=85210926611&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2024.3422401
DO - 10.1109/TNNLS.2024.3422401
M3 - Article
C2 - 39012740
AN - SCOPUS:85210926611
SN - 2162-237X
VL - 36
SP - 9395
EP - 9407
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 5
ER -