Abstract
Convolutional Neural Networks (CNNs) exhibit translation invariance but lack rotation invariance. In recent years, rotation encoding for CNNs has become a mainstream approach to addressing this issue, but it requires a significant number of parameters and computational resources. Given that images are the primary focus of computer vision, a model called Offset Angle and Multibranch CNN (OAMC) is proposed to achieve rotation invariance. First, the model detects the offset angle of the input image and rotates the image back accordingly. Second, the rotated image is fed into a multibranch CNN with no rotation encoding. Finally, a response module selects the optimal branch as the final prediction of the model. Notably, with a minimal parameter count of 8k, the model achieves a best classification accuracy of 96.98% on the rotated handwritten digits dataset. Furthermore, compared with previous research on remote sensing datasets, the model achieves up to an 8% improvement in accuracy using only one third of the parameters of existing models.
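The abstract describes a three-stage pipeline: estimate the offset angle, rotate the image back, then classify with a multibranch CNN whose best branch is chosen by a response module. The sketch below illustrates that flow under assumptions not stated in the abstract (PyTorch, an angle-regression head, three lightweight branches, and confidence-based branch selection as a stand-in for the response module); it is not the authors' published architecture.

```python
# Minimal sketch of the OAMC pipeline, assuming PyTorch.
# Layer sizes, the angle-regression head, and the branch count are illustrative.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF


class OffsetAngleEstimator(nn.Module):
    """Predicts the offset angle (in degrees) of an input image (assumed head)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4)
        )
        self.regressor = nn.Linear(8 * 4 * 4, 1)  # single scalar: estimated angle

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1)).squeeze(-1)


class OAMC(nn.Module):
    """Sketch of Offset Angle and Multibranch CNN: estimate the offset angle,
    rotate the image back, classify with several lightweight branches, and let
    a response module pick the final prediction."""
    def __init__(self, num_classes=10, num_branches=3):
        super().__init__()
        self.angle_head = OffsetAngleEstimator()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(8 * 4 * 4, num_classes),
            )
            for _ in range(num_branches)
        ])

    def forward(self, x):
        # Step 1: detect the offset angle and rotate each image back.
        angle = self.angle_head(x)                                   # (B,)
        rotated = torch.stack([
            TF.rotate(img, -float(a)) for img, a in zip(x, angle)
        ])
        # Step 2: run every branch on the angle-corrected image.
        logits = torch.stack([b(rotated) for b in self.branches])    # (K, B, C)
        # Step 3: response module stand-in -- keep the branch with the highest
        # softmax confidence for each sample as the final prediction.
        conf, _ = logits.softmax(-1).max(-1)                         # (K, B)
        best = conf.argmax(0)                                        # (B,)
        return logits[best, torch.arange(x.size(0))]                 # (B, C)


model = OAMC()
print(model(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 10])
```

In this sketch the branches share no rotation encoding, matching the abstract's claim that invariance comes from correcting the input angle rather than from rotation-equivariant filters.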
| Translated title of the contribution | Design of Rotation Invariant Model Based on Image Offset Angle and Multibranch Convolutional Neural Networks |
| --- | --- |
| Original language | Chinese (Traditional) |
| Pages (from-to) | 4522-4528 |
| Number of pages | 7 |
| Journal | Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology |
| Volume | 46 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - Dec 2024 |
| Externally published | Yes |