TY - JOUR
T1 - MLAN: Multi-Level Attention Network
AU - Qin, Peinuan
AU - Wang, Qinxuan
AU - Zhang, Yue
AU - Wei, Xueyao
AU - Gao, Meiguo
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2022
Y1 - 2022
N2 - In this paper, we propose a 'Multi-Level Attention Network' (MLAN), which defines a multi-level structure comprising layer, block, and group levels to obtain hierarchical attention and combines the corresponding residual information for better feature extraction. We also construct a shared mask attention (SMA) module, which significantly reduces the number of parameters compared with conventional attention methods. Building on MLAN and SMA, we further investigate a variety of information fusion modules for better feature fusion at different levels. We conduct classification experiments on ResNet backbones of different depths; the results show that our method delivers a significant performance improvement over the backbone on the CIFAR10 and CIFAR100 datasets. Compared with mainstream attention methods, MLAN also achieves higher accuracy with fewer parameters and lower computational complexity. Finally, we visualize intermediate feature maps to explain why MLAN performs well.
AB - In this paper, we propose a 'Multi-Level Attention Network' (MLAN), which defines a multi-level structure comprising layer, block, and group levels to obtain hierarchical attention and combines the corresponding residual information for better feature extraction. We also construct a shared mask attention (SMA) module, which significantly reduces the number of parameters compared with conventional attention methods. Building on MLAN and SMA, we further investigate a variety of information fusion modules for better feature fusion at different levels. We conduct classification experiments on ResNet backbones of different depths; the results show that our method delivers a significant performance improvement over the backbone on the CIFAR10 and CIFAR100 datasets. Compared with mainstream attention methods, MLAN also achieves higher accuracy with fewer parameters and lower computational complexity. Finally, we visualize intermediate feature maps to explain why MLAN performs well.
KW - Multi-level structure
KW - hierarchical attention aggregation
KW - information fusion
KW - shared mask attention
UR - http://www.scopus.com/inward/record.url?scp=85139440537&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2022.3210711
DO - 10.1109/ACCESS.2022.3210711
M3 - Article
AN - SCOPUS:85139440537
SN - 2169-3536
VL - 10
SP - 105437
EP - 105446
JO - IEEE Access
JF - IEEE Access
ER -