TY - JOUR
T1 - Visually oriented flight and coordination of unmanned aerial vehicle swarms without explicit communication
AU - Xiong, Jing
AU - Li, Juan
AU - Li, Jie
AU - Liu, Chang
AU - Zhang, Sheng
N1 - Publisher Copyright:
© 2025 Elsevier Inc.
PY - 2026/4/15
Y1 - 2026/4/15
N2 - This paper takes inspiration from avian optomotor responses and proposes a visually oriented flight and coordination (VOFC) model for unmanned aerial vehicle (UAV) swarms that operates without explicit communication. The bio-inspired VOFC model addresses the problem of deploying UAV swarms for collaborative tasks when reliable communication channels are unavailable or compromised. The VOFC model adopts a four-layer structure comprising raw visual inputs, visual cues, behavior rules, and an output action. The raw visual inputs, consisting of full-range projections and close-range observations, are recombined into visual cues that reflect environmental dynamics and swarm variations. The behavior rules are combined into a single output action that determines each individual's motion. Compared to the state-of-the-art hybrid projection model (HPM), the minimally structured stochastic model (MSSM), and the internal belief model (IBM), our model delivers superior collaborative capabilities. Performance metrics on collective motion, such as position entropies and velocity polarization values, show that the VOFC model achieves denser spatial distributions and better-synchronized flight. The proposed VOFC model represents a step forward in vision-based, communication-free coordination techniques for UAV swarms, facilitating UAV applications in communication-restricted settings.
AB - This paper takes inspiration from avian optomotor responses and proposes a visually oriented flight and coordination (VOFC) model for unmanned aerial vehicle (UAV) swarms that operates without explicit communication. The bio-inspired VOFC model addresses the problem of deploying UAV swarms for collaborative tasks when reliable communication channels are unavailable or compromised. The VOFC model adopts a four-layer structure comprising raw visual inputs, visual cues, behavior rules, and an output action. The raw visual inputs, consisting of full-range projections and close-range observations, are recombined into visual cues that reflect environmental dynamics and swarm variations. The behavior rules are combined into a single output action that determines each individual's motion. Compared to the state-of-the-art hybrid projection model (HPM), the minimally structured stochastic model (MSSM), and the internal belief model (IBM), our model delivers superior collaborative capabilities. Performance metrics on collective motion, such as position entropies and velocity polarization values, show that the VOFC model achieves denser spatial distributions and better-synchronized flight. The proposed VOFC model represents a step forward in vision-based, communication-free coordination techniques for UAV swarms, facilitating UAV applications in communication-restricted settings.
KW - Avian optomotor response
KW - Collective motion
KW - Emergence
KW - Swarm
KW - Unmanned aerial vehicle (UAV)
KW - Visually oriented flight and coordination
UR - https://www.scopus.com/pages/publications/105023821383
U2 - 10.1016/j.ins.2025.122936
DO - 10.1016/j.ins.2025.122936
M3 - Article
AN - SCOPUS:105023821383
SN - 0020-0255
VL - 732
JO - Information Sciences
JF - Information Sciences
M1 - 122936
ER -