SIA OpenIR > Robotics Laboratory
Title: 视觉里程计方法研究
Alternative Title: Research on Visual Odometry Methods
Author: 杜英魁 (Du Yingkui) 1,2
Department: Robotics Laboratory
Thesis Advisors: 韩建达 (Han Jianda); 唐延东 (Tang Yandong)
Classification: TP391.41
Keywords: Visual Odometry; Robot; Localization and Navigation
Call Number: TP391.41/D79/2010
Pages: 106
Degree Discipline: Pattern Recognition and Intelligent Systems
Degree Name: Doctor (博士)
Date: 2010-06-08
Degree Grantor: Shenyang Institute of Automation, Chinese Academy of Sciences
Place of Conferral: Shenyang
Abstract: This thesis targets the pose-estimation accuracy and real-time performance of visual odometry, addressing the key technical problems of environmental feature point extraction and tracking, feature point 3D reconstruction and its uncertainty, optical flow estimation, and pose estimation. Monocular, binocular, and multi-camera visual odometry methods are studied algorithmically and verified experimentally. The experimental results show that the proposed visual odometry methods can accurately localize a robot over long-duration motion without relying on GPS.

For feature point matching, stereo image pairs are epipolar-rectified with the Fusiello algorithm; combined with a maximum-disparity constraint, this reduces the matching search range from a 2D region to a 1D interval. An improved MNCC feature correlation algorithm is proposed that enhances environmental texture and is robust to illumination changes, improving the stability of the correlation computation. Mismatched and mistracked feature points are filtered out using the principle that the spatial points corresponding to correctly tracked features move consistently with the robot's relative motion. The uncertainty of feature point 3D reconstruction is described by a covariance matrix, and a weighted optimization algorithm effectively reduces the influence of 3D reconstruction errors on pose-estimation accuracy.

Estimating the robot's pose change from the 3D coordinate changes of the reconstructed spatial points is an equality-constrained nonlinear optimization problem. It is first converted into an unconstrained nonlinear optimization problem using Lagrange's theorem; the contribution of each feature point to the least-squares estimate is balanced according to its 3D reconstruction uncertainty, and an initial pose estimate is computed with RANSAC and weighted least squares. Using the Rodrigues formula, the pose parameters to be estimated are reduced to six, and a sparse LM (Levenberg-Marquardt) algorithm rapidly refines the initial estimate to obtain a more accurate result.

A monocular visual odometry method based on straight-line constrained optical flow is proposed, and the straight-line constrained optical flow is defined. By decomposing the planar motion of a line, the motion field of a line segment in the image is described with two translation parameters and one rotation parameter. This has two advantages. First, the straight-line constrained optical flow is cheap and fast to compute, resolving the traditional trade-off between optical-flow accuracy and computation time. Second, its image-motion parameters correspond directly to the pose parameters of the robot's planar motion, so a mapping between the line optical flow and the robot's planar pose change can be established from the camera projection model; the planar pose parameters can then be solved for directly and separately, avoiding the nonlinear solution problem introduced by rotational motion.

Binocular visual odometry can estimate the robot's pose change in six degrees of freedom, but its vision algorithms are computationally expensive, giving poor real-time performance. Straight-line constrained optical-flow monocular visual odometry is computationally light and fast, but estimates only three pose parameters, which is insufficient for robots on complex terrain. To address the shortcomings of both, and inspired by the compound eyes of insects, an array-based high-speed visual odometry method is proposed that can estimate the robot's 6-DOF pose change in real time. Its core ideas are parallel computation and data fusion: an array is formed from multiple high-speed vision measurement units computing in parallel, and each unit's vision algorithm processes only a small preset region of the input stereo images, so a single unit achieves a high processing rate. Fusing the outputs of all units then yields a high-accuracy pose estimate. Three different array structures of vision measurement units are designed, the corresponding coordinate-transformation matrices and data-verification algorithms are given, and simulation experiments verify the approach.
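The initial pose estimate described in the abstract (weighted least squares over matched 3D feature points, before sparse LM refinement) can be illustrated with a minimal sketch. This is not the thesis's exact algorithm: it uses the standard closed-form weighted Kabsch/Umeyama solution, with the per-point weights standing in for the inverse reconstruction uncertainties mentioned above.

```python
import numpy as np

def estimate_rigid_motion(P, Q, w=None):
    """Estimate rotation R and translation t minimizing
    sum_i w_i * ||Q_i - (R @ P_i + t)||^2  (weighted Kabsch/Umeyama).

    P, Q : (N, 3) matched 3D points before/after the robot motion.
    w    : optional (N,) weights, e.g. inverse reconstruction variances.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    w = np.ones(len(P)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    # Weighted centroids of both point sets
    p0 = (w[:, None] * P).sum(axis=0)
    q0 = (w[:, None] * Q).sum(axis=0)
    # Weighted 3x3 cross-covariance of the centered points
    H = (P - p0).T @ np.diag(w) @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t
```

In a full pipeline this closed-form estimate would play the role of the RANSAC/weighted-least-squares initial value, which a nonlinear refinement (such as sparse LM over the six Rodrigues pose parameters) would then polish.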
Other Abstract: Visual odometry is an important application of computer vision and a hot topic in robot autonomous navigation research. Mobile robots in unstructured environments are localized either by relative positioning methods, such as wheel odometry and the IMU (Inertial Measurement Unit), or by absolute positioning methods, such as GPS (Global Positioning System). In relative positioning, a robot is usually located by motion estimation and dead reckoning; however, wheel odometry error grows rapidly when the robot moves in high-slip environments, and drift is the main drawback of the IMU. Wheel odometry and the IMU must therefore be corrected periodically by an absolute positioning method during long-duration operation. As an absolute positioning method, GPS is available only near the Earth's surface and is sensitive to signal jamming and other uncertainties, such as satellite faults. For robot missions in extreme environments, such as polar regions and space exploration, these traditional methods can hardly meet the positioning demand. Visual odometry is a newer robot positioning technology that estimates the motion of a robot using only captured images or video. Its core idea is to estimate robot motion from the coordinate changes of features extracted from the image sequence, together with the projection model of the cameras. It is a relative positioning method and a passive, non-contact measurement technique. Compared to traditional methods such as wheel odometry, the IMU, and GPS, visual odometry can be applied in almost any environment illuminated by visible light, and its accumulated error is small. Although it is still a relative positioning method, existing research has verified that the localization error of visual odometry accumulates slowly and can be kept within a small range even during long robot missions in high-slip environments.
Visual odometry is therefore an effective self-localization method for mobile robots in GPS-denied environments. The key algorithms of visual odometry are feature extraction, feature matching and tracking, 3D reconstruction, and motion estimation. Based on this algorithmic research, models for monocular, binocular, and multi-camera visual odometry are proposed and implemented for real-time applications. For feature point matching, we use the Fusiello epipolar rectification algorithm together with a disparity constraint to reduce the search range for candidate features from a 2D region to a short 1D interval. An improved MNCC algorithm is proposed for computing the correlation between feature points; it is robust to illumination changes and enhances image texture. The consistency of the relative motion between world points and the robot is used to reject outliers. The uncertainty of feature point reconstruction is described by a covariance matrix, which is used in the motion estimation algorithm of the binocular visual odometry to reduce error propagation. Motion estimation is a nonlinear equality-constrained optimization problem; it is transformed into an unconstrained nonlinear optimization by Lagrange's theorem. The initial motion is estimated with RANSAC and weighted least squares to reduce the effect of reconstruction uncertainty, and a sparse LM algorithm then refines the motion estimate quickly and accurately. A monocular visual odometry method using straight-line constrained optical flow is also proposed. The straight-line constrained optical flow is defined by decomposing the motion of a line segment in the image plane into one rotation and two directional translations. These optical-flow parameters correspond to the robot's planar motion parameters, so each motion parameter can be estimated separately by combining the flow with the camera projection model. Estimating the straight-line constrained optical flow is much faster than traditional optical-flow methods and avoids solving a nonlinear problem.
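The rectified 1-D matching search described above can be sketched as follows. This uses plain zero-mean NCC, not the thesis's improved MNCC variant, and the patch size and maximum disparity are illustrative assumptions; the point is that after epipolar rectification the candidate matches for a left-image feature lie on a single scanline interval of the right image.

```python
import numpy as np

def ncc(a, b, eps=1e-9):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def match_along_scanline(left, right, x, y, half=3, max_disp=32):
    """After epipolar rectification, the match for (x, y) in the left image
    lies on row y of the right image within [x - max_disp, x]: search only
    this 1-D interval for the highest-correlation patch."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1]
    best_score, best_x = -2.0, None
    for xr in range(max(half, x - max_disp), x + 1):
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1]
        if cand.shape != patch.shape:  # skip image-border candidates
            continue
        s = ncc(patch, cand)
        if s > best_score:
            best_score, best_x = s, xr
    return best_x, best_score
```

The disparity constraint is what makes this cheap: instead of a 2D window search, only `max_disp + 1` candidate positions are scored per feature.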
Binocular visual odometry can estimate the 6-DOF motion of a robot, but the computational cost of its vision algorithms is high, so it can be applied only to slowly moving robots. Monocular visual odometry using straight-line constrained optical flow is suitable for real-time applications, but it estimates only 3 DOF of motion. Inspired by the compound eyes of insects, we propose an array-based high-speed visual odometry method that estimates the 6-DOF motion of a robot in real time. Its key ideas are parallel computing and data fusion. The array is composed of a number of high-speed vision measurement units; to achieve high processing speed, each unit computes only a small region of its input image. An accurate motion estimate is then obtained by fusing the outputs of all units.
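The data-fusion step of the array-based method can be illustrated with a minimal sketch. The thesis's actual coordinate-transformation and data-verification algorithms are not reproduced here; this is generic inverse-covariance (information-form) fusion of independent per-unit estimates, a common way to combine measurements so that more certain units contribute more.

```python
import numpy as np

def fuse_estimates(estimates, covariances):
    """Fuse independent estimates x_i ~ N(x, C_i) of the same pose-parameter
    vector by inverse-covariance weighting:
        C = (sum_i C_i^-1)^-1
        x = C @ sum_i (C_i^-1 @ x_i)
    Returns the fused estimate and its covariance."""
    infos = [np.linalg.inv(C) for C in covariances]
    C = np.linalg.inv(sum(infos))
    x = C @ sum(I @ e for I, e in zip(infos, estimates))
    return x, C
```

With equal covariances this reduces to the plain mean; with unequal covariances the fused estimate is pulled toward the more confident measurement unit, and the fused covariance is smaller than any individual one.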
Language: Chinese
Contribution Rank: 1
Document Type: Degree Thesis (学位论文)
Identifier: http://ir.sia.cn/handle/173321/9404
Collection: Robotics Laboratory
Affiliations: 1. Shenyang Institute of Automation, Chinese Academy of Sciences
2. Graduate University of Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
杜英魁. 视觉里程计方法研究[D]. 沈阳: 中国科学院沈阳自动化研究所, 2010.
Files in This Item:
视觉里程计方法研究.pdf (5887 KB), Full Text, Open Access, License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.