A digital twin-driven human-machine interactive assembly method based on lightweight multi-target detection and assembly feature generation

Dinghao Cheng, Bingtao Hu*, Yixiong Feng, Jiangxin Yang, Baicun Wang, Hao Gong, Jianrong Tan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In the field of industrial assembly, human-machine interactive assembly methods are widely used. However, the interactive process suffers from a lack of virtual-physical mapping, convoluted guidance systems, and low guidance precision. To address these issues, a digital twin-driven human-machine interactive assembly method is proposed. The lightweight YOLOv7-tiny model is used to detect parts accurately, and attention modules are incorporated into the backbone network to enhance the model's feature extraction capability in complicated assembly environments. The OpenCV method is employed to generate geometric reference features for parts. The proposed assembly method is validated on the assembly process of a reducer. The experimental results show that the proposed method provides visual guidance for the assembly process, improves on the traditional list-type retrieval of assembly components, and overcomes the drawback that pre-set guidance in a guidance system may fail to adapt to changes in assembly results during actual operation. It can accurately instruct novices in assembly, is characterised by easy implementation, low cost and high accuracy, and is of great significance for improving the success rate and efficiency of human-machine interactive assembly.
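As a rough illustration of the geometric-feature step mentioned in the abstract, the sketch below shows one way reference features could be derived from a detected part crop with OpenCV (edge extraction, contour detection, oriented bounding geometry). The function name, the choice of operators (Canny, minAreaRect) and the thresholds are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch: deriving geometric reference features for a part
# crop with OpenCV. Operator choices and thresholds are assumptions.
import cv2
import numpy as np

def extract_geometric_features(part_crop: np.ndarray) -> dict:
    """Return simple geometric reference features (centroid, oriented
    bounding box, area) for the largest contour in a part image crop."""
    gray = cv2.cvtColor(part_crop, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return {}

    largest = max(contours, key=cv2.contourArea)
    moments = cv2.moments(largest)
    cx = moments["m10"] / max(moments["m00"], 1e-6)
    cy = moments["m01"] / max(moments["m00"], 1e-6)

    # Oriented bounding box: centre, (width, height), rotation angle.
    (box_cx, box_cy), (w, h), angle = cv2.minAreaRect(largest)

    return {
        "centroid": (cx, cy),
        "oriented_box": ((box_cx, box_cy), (w, h), angle),
        "area": cv2.contourArea(largest),
    }
```

In a pipeline of the kind described, such features could be computed on crops returned by the detector and overlaid in the digital twin as visual assembly guidance.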

Original language: English
Journal: International Journal of Production Research
DOI
Publication status: Accepted/In press - 2024
