R-VPCG: RGB image feature fusion-based virtual point cloud generation for 3D car detection
Affiliation: 1. Air Force Institute of Technology, 2950 Hobson Way (AFIT/ENV), Wright Patterson AFB, OH 45433, USA
Abstract: Although feature-fusion-based 3D object detection methods have made great progress, they still suffer from low precision due to sparse point clouds. In this paper, we propose a new feature-fusion-based method that generates virtual point clouds and improves the precision of car detection. Because RGB images carry rich semantic information, the method first segments cars from the image and then projects the raw point clouds onto the segmented car image to extract the point clouds belonging to cars. The segmented point clouds are fed into a virtual point cloud generation module, which regresses the orientation of each car, combines the foreground points to generate virtual point clouds, and superimposes them on the raw point clouds. Finally, the processed point clouds are converted to a voxel representation and fed into a 3D sparse convolutional network to extract features, and a region proposal network then detects cars in a bird's-eye view. Experimental results on the KITTI dataset show that our method is effective and that its precision compares favorably with other feature-fusion-based methods.
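The projection step described in the abstract, selecting the raw LiDAR points that fall on a segmented car region, can be sketched as below. This is a minimal illustration, not the authors' implementation: `segment_car_points`, its arguments, and the synthetic calibration values are all hypothetical, and the points are assumed to already be in the camera frame (on KITTI, velodyne points would first need the `Tr_velo_to_cam` transform).

```python
import numpy as np

def segment_car_points(points, P, mask):
    """Keep LiDAR points whose image projection lands on a car mask.

    points : (N, 3) XYZ, assumed already in the camera frame (hypothetical;
             KITTI velodyne points need Tr_velo_to_cam applied first)
    P      : (3, 4) camera projection matrix (KITTI P2-style)
    mask   : (H, W) boolean car-segmentation mask from the RGB image
    """
    # Homogeneous coordinates, then project: (N, 4) @ (4, 3) -> (N, 3)
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    proj = homo @ P.T
    in_front = proj[:, 2] > 0                      # discard points behind the camera
    uv = proj[:, :2] / proj[:, 2:3]                # perspective divide -> pixel coords
    u = uv[:, 0].astype(int)
    v = uv[:, 1].astype(int)
    h, w = mask.shape
    in_img = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(points.shape[0], dtype=bool)
    keep[in_img] = mask[v[in_img], u[in_img]]      # point survives only on a car pixel
    return points[keep]
```

The boolean-mask lookup replaces an explicit per-point loop, so the whole segmentation is a handful of vectorized NumPy operations over the full cloud.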
Keywords: 3D object detection; Point clouds; Autonomous driving; Segmentation
Indexed in ScienceDirect and other databases.