Monocular Visual Odometry Based on Recurrent Convolutional Neural Networks
Cite this article: CHEN Zonghai, HONG Yang, WANG Jikai, GE Zhenhua. Monocular Visual Odometry Based on Recurrent Convolutional Neural Networks[J]. Robot, 2019, 41(2): 147-155.
Authors: CHEN Zonghai  HONG Yang  WANG Jikai  GE Zhenhua
Affiliation: Department of Automation, University of Science and Technology of China, Hefei 230027, Anhui, China
Abstract: A monocular visual odometry method based on a convolutional long short-term memory (LSTM) network and a convolutional neural network (CNN) is proposed, named LSTMVO (LSTM visual odometry). LSTMVO uses an unsupervised, end-to-end deep learning framework to estimate the 6-DoF (degree-of-freedom) pose of a monocular camera and the scene depth simultaneously. The framework consists of a pose estimation network and a depth estimation network. The pose estimation network is a deep recurrent convolutional neural network (RCNN) that performs monocular pose estimation end to end, combining CNN-based feature extraction with temporal modeling based on a recurrent neural network (RNN). The depth estimation network generates dense depth maps, built mainly on an encoder-decoder architecture. A new loss function is also proposed for training, composed of a temporal loss between image sequences, a depth smoothness loss, and a forward-backward consistency loss. Experimental results on the KITTI dataset show that, when trained on raw monocular RGB images, LSTMVO outperforms existing mainstream monocular visual odometry methods in both pose estimation accuracy and depth estimation accuracy, verifying the effectiveness of the proposed deep learning framework.

Keywords: convolutional LSTM (long short-term memory)  recurrent convolutional neural network (RCNN)  monocular visual odometry  unsupervised learning
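The pose branch described in the abstract couples CNN feature extraction with convolutional LSTM temporal modeling and a 6-DoF regression output. The paper's layer configuration is not given on this page, so the PyTorch-style sketch below is only a minimal illustration of that structure under assumed layer sizes; ConvLSTMCell, PoseRCNN, and all hyperparameters are placeholders, not the authors' published architecture.

    import torch
    import torch.nn as nn

    class ConvLSTMCell(nn.Module):
        """A basic convolutional LSTM cell: the four gates are computed with one 2-D convolution."""
        def __init__(self, in_ch, hid_ch, kernel_size=3):
            super().__init__()
            self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size, padding=kernel_size // 2)

        def forward(self, x, state):
            h, c = state
            i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, c

    class PoseRCNN(nn.Module):
        """CNN feature extraction + ConvLSTM temporal modeling + 6-DoF pose regression."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(  # two adjacent RGB frames stacked channel-wise (6 channels)
                nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.convlstm = ConvLSTMCell(128, 128)
            self.pose_head = nn.Conv2d(128, 6, 1)  # 3 translation + 3 rotation parameters

        def forward(self, frame_pairs):
            # frame_pairs: (batch, time, 6, H, W), each step holding two consecutive frames
            b, t, _, _, _ = frame_pairs.shape
            feat = self.encoder(frame_pairs[:, 0])  # run once to size the recurrent state
            state = (torch.zeros_like(feat), torch.zeros_like(feat))
            poses = []
            for k in range(t):
                state = self.convlstm(self.encoder(frame_pairs[:, k]), state)
                # Spatially average the hidden state, then regress one 6-DoF pose per step.
                poses.append(self.pose_head(state[0]).mean(dim=(2, 3)))
            return torch.stack(poses, dim=1)  # (batch, time, 6)

In a real setup the sequence would come from consecutive KITTI frames, and the predicted poses would feed a view-synthesis objective such as the one sketched further below.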

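The training objective named in the abstract combines a temporal loss between image sequences, a depth smoothness loss, and a forward-backward consistency loss. The exact formulation is in the paper itself, so the sketch below only shows one plausible way such terms are combined; the warp_to_target helper, the loss weights, and the pose-based consistency proxy are assumptions rather than the authors' definitions.

    import torch
    import torch.nn.functional as F

    def smoothness_loss(depth, image):
        """Edge-aware smoothness: penalize depth gradients, downweighted at image edges."""
        d_dx = (depth[:, :, :, 1:] - depth[:, :, :, :-1]).abs()
        d_dy = (depth[:, :, 1:, :] - depth[:, :, :-1, :]).abs()
        i_dx = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
        i_dy = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
        return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()

    def total_loss(target, source, depth, pose_fwd, pose_bwd, warp_to_target,
                   w_smooth=0.1, w_consist=0.05):
        """Temporal + smoothness + forward-backward consistency terms (illustrative weights).

        warp_to_target(source, depth, pose) is assumed to synthesize the target view
        from the source frame using the predicted depth map and relative pose.
        """
        # Temporal (photometric) loss between the real and the synthesized target frame.
        temporal = F.l1_loss(warp_to_target(source, depth, pose_fwd), target)

        # Depth smoothness loss on the predicted dense depth map.
        smooth = smoothness_loss(depth, target)

        # Forward-backward consistency, here only a simple proxy: composing the
        # forward and backward 6-DoF predictions should roughly cancel out.
        consistency = (pose_fwd + pose_bwd).abs().mean()

        return temporal + w_smooth * smooth + w_consist * consistency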
This article is indexed in CNKI, Weipu (维普), Wanfang Data, and other databases.