Robotic Visual Servoing Based on Convolutional Neural Network

Jingshu Liu, Yuan Li, Renxing Yang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Citations (Scopus)

Abstract

To simplify the problems of image feature selection and extraction, and of nonlinear mapping estimation, in traditional visual servoing, we present a visual servoing method based on a convolutional neural network (CNN) to realize precise, robust, and real-time 6-DOF control of robotic manipulation. We propose an approach to design and generate a dataset from a single image by simulating the motion of an eye-in-hand robotic system. This dataset is used to train our visual servoing CNN, which computes the relative pose between the robotic system and the external environment. The output of the network is then employed in the visual control scheme. In simulation, the method converges robustly, with a position error of less than one millimeter and a rotation error of less than half a degree on average.
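The single-image dataset-generation idea described in the abstract could be sketched as follows. This is a minimal sketch only: the paper does not specify its simulator, image size, or pose parameterisation, so the planar-warp approximation of camera motion and all function names below are assumptions made for illustration.

```python
import numpy as np

def random_pose_delta(rng, t_max=0.01, r_max=0.5):
    """Sample a small relative pose label: 3 translations (m), 3 rotations (deg)."""
    t = rng.uniform(-t_max, t_max, size=3)
    r = rng.uniform(-r_max, r_max, size=3)
    return np.concatenate([t, r])

def warp_image(img, dx, dy, angle_deg):
    """Nearest-neighbour in-plane warp, a crude stand-in for simulated camera motion."""
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    a = np.deg2rad(angle_deg)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: rotate source coordinates about the centre, then shift.
    xr = np.cos(a) * (xs - cx) + np.sin(a) * (ys - cy) + cx - dx
    yr = -np.sin(a) * (xs - cx) + np.cos(a) * (ys - cy) + cy - dy
    xi = np.clip(np.round(xr).astype(int), 0, w - 1)
    yi = np.clip(np.round(yr).astype(int), 0, h - 1)
    return img[yi, xi]

def make_dataset(ref_img, n_samples, rng):
    """Generate (warped image, relative-pose label) pairs from one reference image."""
    images, labels = [], []
    for _ in range(n_samples):
        pose = random_pose_delta(rng)
        # Hypothetical pixel mapping: only x/y translation and yaw are rendered here.
        warped = warp_image(ref_img,
                            dx=pose[0] * 1000, dy=pose[1] * 1000,
                            angle_deg=pose[5])
        images.append(warped)
        labels.append(pose)
    return np.stack(images), np.stack(labels)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))            # placeholder for the single captured image
X, y = make_dataset(ref, 8, rng)
print(X.shape, y.shape)               # (8, 64, 64) (8, 6)
```

A CNN trained on such pairs regresses the 6-DOF pose label from the warped view; the regressed pose then drives a standard pose-based servoing law.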

Original language: English
Title of host publication: Proceedings - 2020 Chinese Automation Congress, CAC 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2245-2250
Number of pages: 6
ISBN (Electronic): 9781728176871
DOIs
Publication status: Published - 6 Nov 2020
Event: 2020 Chinese Automation Congress, CAC 2020 - Shanghai, China
Duration: 6 Nov 2020 - 8 Nov 2020

Publication series

Name: Proceedings - 2020 Chinese Automation Congress, CAC 2020

Conference

Conference: 2020 Chinese Automation Congress, CAC 2020
Country/Territory: China
City: Shanghai
Period: 6/11/20 - 8/11/20

Keywords

  • CNN
  • robotic manipulation
  • visual servoing
