A Fine-Grained Attention Model for High Accuracy Operational Robot Guidance

Yinghao Chu, Daquan Feng*, Zuozhu Liu, Lei Zhang, Zizhou Zhao, Zhenzhong Wang, Zhiyong Feng, Xiang Gen Xia

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Deep learning enhanced Internet of Things (IoT) is advancing the transformation toward smart manufacturing. Intelligent robot guidance is one of the most promising deep learning + IoT applications in the manufacturing industry. However, low cost, efficient computing, and extremely high localization accuracy are mandatory requirements for vision-based robot guidance, particularly in operational factories. Therefore, in this work, a low-cost edge-computing-based IoT system is developed around an innovative fine-grained attention model (FGAM). FGAM integrates a deep-learning-based attention model that detects the region of interest (ROI) with an optimized conventional computer vision model that performs fine-grained localization concentrated on the ROI. Trained with only 100 images collected from a real production line, the proposed FGAM outperforms multiple benchmark models when validated on operational data. The FGAM-based edge computing system has been deployed on a welding robot in a real-world factory for mass production. Over the assembly of about 6000 products, the deployed system achieved an average overall processing and transmission time of 200 ms and an overall localization accuracy of 99.998%.
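The abstract describes a two-stage coarse-to-fine pipeline: a deep-learning attention model proposes the ROI, and a conventional computer vision model then performs fine-grained localization only within that ROI. Below is a minimal sketch of that structure; the `roi_detector` callable is a hypothetical stand-in for the paper's attention model, and OpenCV sub-pixel corner refinement is used purely as an illustrative substitute for the paper's optimized fine-grained localization, which the abstract does not specify.

```python
import cv2


def locate_target(image, roi_detector):
    """Two-stage localization: coarse ROI detection, then fine-grained
    localization restricted to the ROI.

    `roi_detector` is a hypothetical stand-in for the paper's
    deep-learning attention model; it is assumed to return an ROI
    bounding box (x, y, w, h) for the input image.
    """
    # Stage 1: the attention model proposes a region of interest.
    x, y, w, h = roi_detector(image)
    roi = image[y:y + h, x:x + w]

    # Stage 2: conventional computer vision applied to the ROI only.
    # Sub-pixel corner refinement here is an illustrative substitute
    # for the paper's optimized fine-grained localization model.
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=1, qualityLevel=0.01, minDistance=5
    )
    if corners is None:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    refined = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)

    # Map the sub-pixel coordinate back into the full-image frame.
    cx, cy = refined[0, 0]
    return (x + float(cx), y + float(cy))
```

Restricting the classical stage to the ROI is what makes this design cheap enough for edge hardware: the expensive fine-grained search runs on a small crop rather than the full frame, while the sub-pixel refinement supplies the localization accuracy the guidance task demands.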

Original language: English
Pages (from-to): 1066-1081
Number of pages: 16
Journal: IEEE Internet of Things Journal
Volume: 10
Issue number: 2
DOIs
Publication status: Published - 15 Jan 2023
Externally published: Yes

Keywords

  • Attention mechanism
  • Internet of Things (IoT)
  • deep learning
  • edge computing
  • fine-grained image analysis
  • robot guidance
  • smart manufacturing
