Low-observable Target Detection Method for Autonomous Vehicles Based on Multi-modal Feature Fusion
ZOU Wei, YIN Guodong, LIU Haoji, GENG Keke, HUANG Wenhan, WU Yuan, XUE Hongwei. Low-observable Target Detection Method for Autonomous Vehicles Based on Multi-modal Feature Fusion[J]. China Mechanical Engineering, 2021, 32(9): 1114-1125.
Authors:ZOU Wei  YIN Guodong  LIU Haoji  GENG Keke  HUANG Wenhan  WU Yuan  XUE Hongwei
Affiliation:School of Mechanical Engineering, Southeast University, Nanjing, 211189
Foundation items:National Natural Science Foundation of China (51975118); Key Research and Development Program of Jiangsu Province (BE2019004-2); Jiangsu Province Achievement Transformation Project (BA2018023)
Abstract:To address the problem of detecting low-observable targets for autonomous vehicles under real driving conditions, a target detection method based on multi-modal feature fusion was proposed. A multi-modal deep convolutional neural network was designed based on Faster R-CNN to fuse the features of RGB, polarization, and infrared images and thereby improve detection performance on low-observable targets; a real-time detection system for low-observable targets using the three image modalities was developed, and the applications of multi-modal image feature fusion in the intelligent perception systems of autonomous vehicles were explored. A manually labeled multi-modal image dataset of low-observable targets was built, and the deep neural network was trained on it to optimize the internal parameters, so that the system is suitable for detecting and recognizing pedestrians and vehicles in complex environments. The experimental results indicate that, compared with traditional single-modal detection algorithms, the deep convolutional neural network based on multi-modal feature fusion achieves better detection and recognition performance on low-observable targets in complex environments.
Keywords:autonomous driving  multi-modal feature fusion  deep convolutional neural network  low-observable target  intelligent perception  
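
The abstract describes feature-level fusion of RGB, polarization and infrared images in a Faster R-CNN style detector. Purely as an illustration, and not the authors' code, a minimal PyTorch sketch of such a fusion backbone is given below; the encoder depths, channel counts, the 1x1 fusion convolution and all names are assumptions, and the fused feature map would then feed the region proposal network and ROI heads of a Faster R-CNN style detector.

# Hypothetical sketch of multi-modal feature fusion (not the paper's implementation).
# Three lightweight CNN encoders extract features from the RGB, polarization and
# infrared images; the feature maps are concatenated along the channel axis and
# mixed by a 1x1 convolution, giving one fused map for a Faster R-CNN style head.
import torch
import torch.nn as nn


def make_encoder(in_channels: int, out_channels: int = 128) -> nn.Sequential:
    # Small convolutional encoder; two stride-2 convs halve the resolution twice.
    return nn.Sequential(
        nn.Conv2d(in_channels, 64, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(64, out_channels, kernel_size=3, stride=2, padding=1),
        nn.ReLU(inplace=True),
    )


class MultiModalFusionBackbone(nn.Module):
    # Fuses RGB (3-channel), polarization (1-channel) and infrared (1-channel) features.

    def __init__(self, feat_channels: int = 128, fused_channels: int = 256):
        super().__init__()
        self.rgb_enc = make_encoder(3, feat_channels)
        self.pol_enc = make_encoder(1, feat_channels)
        self.ir_enc = make_encoder(1, feat_channels)
        # 1x1 convolution mixes the concatenated modality features.
        self.fuse = nn.Conv2d(3 * feat_channels, fused_channels, kernel_size=1)
        self.out_channels = fused_channels  # detection heads read this attribute

    def forward(self, rgb, pol, ir):
        feats = torch.cat(
            [self.rgb_enc(rgb), self.pol_enc(pol), self.ir_enc(ir)], dim=1
        )
        return self.fuse(feats)


if __name__ == "__main__":
    backbone = MultiModalFusionBackbone()
    rgb = torch.randn(1, 3, 480, 640)   # colour image
    pol = torch.randn(1, 1, 480, 640)   # polarization image (assumed single channel)
    ir = torch.randn(1, 1, 480, 640)    # infrared image (assumed single channel)
    fused = backbone(rgb, pol, ir)
    print(fused.shape)  # torch.Size([1, 256, 120, 160])

The single-channel polarization and infrared inputs, as well as the concatenate-then-1x1-convolve fusion step, are assumptions made only to keep the sketch self-contained; the paper's actual network structure, input formats and training details are given in the full text.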