Abstract
We propose a deep-learning-based visual servoing (VS) approach for precise, robust, and real-time six-degrees-of-freedom (6DOF) control of robotic manipulation, which eases the extraction of image features and the estimation of the nonlinear mapping between the two-dimensional image space and the three-dimensional Cartesian space required in traditional VS tasks. Owing to the strong learning capability of convolutional neural networks (CNNs), the network autonomously learns to select and extract image features and to fit this nonlinear mapping. A method for generating a dataset from a few images, or even a single image, by simulating the motion of an eye-in-hand robotic system is described herein. This addresses the large amount of data required for network training and the difficulty of collecting such data in real settings. The generated dataset is used to train our VS convolutional neural network. Subsequently, a two-stream network is designed and the corresponding control approach is presented. Experimental results show that the method converges robustly, with an average position error of less than 3 mm and an average rotation error of less than 2.5°.
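To make the single-image dataset idea concrete, the following is a minimal sketch of one plausible realization: small eye-in-hand camera displacements over a roughly planar scene are simulated, and the reference image is warped with the induced homography while the sampled displacement serves as the training label. The intrinsics, motion ranges, file path, and planar-scene assumption are illustrative, not taken from the paper.

```python
# Sketch only: simulate eye-in-hand camera motion and warp one reference image.
import numpy as np
import cv2

K = np.array([[600.0, 0.0, 320.0],      # assumed camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
d = 0.5                                  # assumed distance to the planar target (m)
n = np.array([[0.0], [0.0], [1.0]])      # plane normal in the reference camera frame

def random_sample(reference_img, rng):
    """Return (warped image, 6-DOF label) for one simulated camera displacement."""
    rvec = rng.uniform(-0.05, 0.05, size=3)      # small rotation (rad)
    t = rng.uniform(-0.02, 0.02, size=(3, 1))    # small translation (m)
    R, _ = cv2.Rodrigues(rvec)
    # Plane-induced homography for X2 = R X1 + t:  H = K (R + t n^T / d) K^-1
    H = K @ (R + t @ n.T / d) @ np.linalg.inv(K)
    h, w = reference_img.shape[:2]
    warped = cv2.warpPerspective(reference_img, H, (w, h))
    label = np.concatenate([t.ravel(), rvec])    # pose offset used as training target
    return warped, label

rng = np.random.default_rng(0)
ref = cv2.imread("reference.png")                # hypothetical reference image path
samples = [random_sample(ref, rng) for _ in range(1000)]
```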
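The abstract does not detail the two-stream architecture; below is a minimal sketch under the assumption that two shared-weight convolutional streams encode the current and desired images and a regression head predicts the 6-DOF pose error driving the controller. Layer sizes, the backbone, and the control gain are illustrative, not the authors' design.

```python
# Sketch only: a two-stream VS network regressing a 6-DOF pose error.
import torch
import torch.nn as nn

class TwoStreamVSNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared-weight convolutional stream applied to both images.
        self.stream = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regression head: 3 translation + 3 rotation components.
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 6),
        )

    def forward(self, current_img, desired_img):
        f_cur = self.stream(current_img)
        f_des = self.stream(desired_img)
        return self.head(torch.cat([f_cur, f_des], dim=1))

# Usage: the predicted pose error could drive a proportional velocity command.
net = TwoStreamVSNet()
pose_error = net(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
velocity_cmd = -0.5 * pose_error  # hypothetical proportional gain
```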
| Original language | English |
| --- | --- |
| Pages (from-to) | 953-962 |
| Number of pages | 10 |
| Journal | Journal of Advanced Computational Intelligence and Intelligent Informatics |
| Volume | 24 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - 20 Dec 2020 |
Keywords
- CNN
- Data augmentation
- Deep learning
- Robotic manipulation
- Visual servoing