TY - GEN
T1 - Application and Interpretable Research of Capsule Network in Situational Understanding
AU - Li, Peizhang
AU - Fei, Qing
AU - Chen, Zhen
AU - Ru, Jiyuan
N1 - Publisher Copyright:
© 2024 Technical Committee on Control Theory, Chinese Association of Automation.
PY - 2024
Y1 - 2024
N2 - In multi-agent collaborative adversarial scenarios, accurate and rapid situation assessment is a crucial prerequisite for unmanned clusters to achieve autonomous decision-making. Leveraging deep learning techniques, multi-agent systems can achieve a precise understanding of complex situations. However, the inherently non-interpretable black-box structure of deep learning makes it challenging to apply in domains with stringent security requirements. In this paper, we propose a threat situation classification network based on Capsule Networks to categorize different scenario situations, and we conduct a comprehensive analysis of the network's interpretability. The network introduces a novel convolutional 'Flatten Layer' that ensures feature capsules are distributed within planes preserving the same relative spatial relationships as the input image. This enables the construction of characteristic plane matrix heatmaps and characteristic volume matrix heatmaps, which, together with the coupling coefficient matrix heatmaps, demonstrate the network's sparse interpretability during classification. Experimental results show that the proposed network can effectively accomplish situation classification tasks while maintaining interpretability, providing insights for situation understanding research in domains with high security requirements.
AB - In multi-agent collaborative adversarial scenarios, accurate and rapid situation assessment is a crucial prerequisite for unmanned clusters to achieve autonomous decision-making. Leveraging deep learning techniques, multi-agent systems can achieve a precise understanding of complex situations. However, the inherently non-interpretable black-box structure of deep learning makes it challenging to apply in domains with stringent security requirements. In this paper, we propose a threat situation classification network based on Capsule Networks to categorize different scenario situations, and we conduct a comprehensive analysis of the network's interpretability. The network introduces a novel convolutional 'Flatten Layer' that ensures feature capsules are distributed within planes preserving the same relative spatial relationships as the input image. This enables the construction of characteristic plane matrix heatmaps and characteristic volume matrix heatmaps, which, together with the coupling coefficient matrix heatmaps, demonstrate the network's sparse interpretability during classification. Experimental results show that the proposed network can effectively accomplish situation classification tasks while maintaining interpretability, providing insights for situation understanding research in domains with high security requirements.
KW - Capsule Network
KW - interpretability analysis
KW - threat situation classification
UR - http://www.scopus.com/inward/record.url?scp=85205462226&partnerID=8YFLogxK
U2 - 10.23919/CCC63176.2024.10661727
DO - 10.23919/CCC63176.2024.10661727
M3 - Conference contribution
AN - SCOPUS:85205462226
T3 - Chinese Control Conference, CCC
SP - 8679
EP - 8684
BT - Proceedings of the 43rd Chinese Control Conference, CCC 2024
A2 - Na, Jing
A2 - Sun, Jian
PB - IEEE Computer Society
T2 - 43rd Chinese Control Conference, CCC 2024
Y2 - 28 July 2024 through 31 July 2024
ER -