Self-supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain

Shengyan Zhou*, Junqiang Xi, Matthew W. McDaniel, Takayuki Nishihata, Phil Salesses, Karl Iagnemma

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

70 Citations (Scopus)

Abstract

Autonomous robotic navigation in forested environments is difficult because of the highly variable appearance and geometric properties of the terrain. In most navigation systems, researchers assume a priori knowledge of the terrain appearance properties, geometric properties, or both. In forest environments, vegetation such as trees, shrubs, and bushes has appearance and geometric properties that vary with changes in season, vegetation age, and vegetation species. In addition, in forested environments the terrain surface is often rough, sloped, and/or covered with a surface layer of grass, vegetation, or snow. The complexity of the forest environment presents difficult challenges for autonomous navigation systems. In this paper, a self-supervised sensing approach is introduced that attempts to robustly identify a drivable terrain surface for robots operating in forested terrain. The sensing system employs both LIDAR and vision sensor data. There are three main stages in the system: feature learning, feature training, and terrain prediction. In the feature learning stage, 3D range points from LIDAR are analyzed to obtain an estimate of the ground surface location. In the feature training stage, the ground surface estimate is used to train a visual classifier to discriminate between ground and nonground regions of the image. In the terrain prediction stage, the ground surface location can be estimated at high frequency solely from vision sensor data.
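The abstract outlines a three-stage pipeline: LIDAR-based ground estimation, training a visual ground/nonground classifier from those estimates, and vision-only prediction. The sketch below illustrates that flow under simple assumptions; the grid-based height heuristic, the RGB pixel features, the logistic-regression classifier, and all function names are illustrative placeholders rather than the authors' implementation, and the camera-LIDAR projection step is omitted.

```python
# Minimal sketch (assumed implementation) of a three-stage self-supervised
# ground-detection pipeline; not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression


def estimate_ground_from_lidar(points, cell_size=0.5, height_tol=0.15):
    """Stage 1 (feature learning): label 3D LIDAR returns as ground / non-ground.

    Assumed heuristic: within each grid cell on the XY plane, returns lying
    close to the lowest return in that cell are treated as the ground surface.
    """
    cells = np.floor(points[:, :2] / cell_size).astype(int)
    ground = np.zeros(len(points), dtype=bool)
    for cell in np.unique(cells, axis=0):
        in_cell = np.all(cells == cell, axis=1)
        z_min = points[in_cell, 2].min()
        ground[in_cell] = points[in_cell, 2] < z_min + height_tol
    return ground


def pixel_features(image, pixels):
    """Illustrative visual feature: normalized RGB at the sampled pixel locations."""
    return image[pixels[:, 1], pixels[:, 0], :].astype(float) / 255.0


def train_visual_classifier(image, pixels, ground_labels):
    """Stage 2 (feature training): fit a ground / non-ground pixel classifier
    supervised by the LIDAR-derived labels.  `pixels` are assumed to be the
    image coordinates of the labeled range points (projection omitted)."""
    clf = LogisticRegression(max_iter=500)
    clf.fit(pixel_features(image, pixels), ground_labels)
    return clf


def predict_ground(clf, image):
    """Stage 3 (terrain prediction): classify every pixel from vision data alone."""
    h, w, _ = image.shape
    feats = image.reshape(-1, 3).astype(float) / 255.0
    return clf.predict(feats).reshape(h, w)
```

In such a scheme, stages 1 and 2 would be rerun whenever newly registered LIDAR/image pairs arrive, while stage 3 runs at camera frame rate on vision data alone, consistent with the high-frequency prediction described in the abstract.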

Original language: English
Pages (from-to): 277-297
Number of pages: 21
Journal: Journal of Field Robotics
Volume: 29
Issue number: 2
DOI
Publication status: Published - Mar 2012
