Real-time joint disparity and disparity flow estimation on programmable graphics hardware
Abstract: Disparity flow depicts the 3D motion of a scene in the disparity space of a given view and can be considered as view-dependent scene flow. A novel algorithm is presented to compute disparity maps and disparity flow maps in an integrated process. Consequently, the disparity flow maps obtained help to enforce temporal consistency between disparity maps of adjacent frames, and the disparity maps found provide the spatial correspondence information needed to cross-validate disparity flow maps of different views. Two different optimization approaches are integrated into the presented algorithm for searching for optimal disparity values and disparity flows: the local winner-take-all approach runs faster, whereas the global dynamic-programming-based approach produces better results. All major computations are performed in the image space of the given view, leading to an efficient implementation on programmable graphics hardware. Experimental results on captured stereo sequences demonstrate the algorithm’s capability of estimating both 3D depth and 3D motion in real time. A quantitative performance evaluation using synthetic data with ground truth is also provided.
Keywords:
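To make the local winner-take-all step described in the abstract concrete, the following is a minimal CPU-side sketch in Python/NumPy. It is an illustration only, not the paper's implementation: the SAD matching cost, the 5x5 aggregation window, and the synthetic test pair are assumptions made here, and the joint disparity/disparity-flow search, the temporal-consistency enforcement, the dynamic-programming variant, and the GPU mapping are not reproduced.

# A minimal CPU sketch of a local winner-take-all (WTA) disparity search, in
# Python/NumPy rather than on graphics hardware. The SAD cost, window size, and
# synthetic test pair are illustrative assumptions; the paper's joint
# disparity/disparity-flow estimation and GPU implementation are not reproduced.
import numpy as np

INVALID = 1.0e6  # cost assigned where a disparity hypothesis has no valid match

def wta_disparity(left, right, max_disp=16, window=5):
    """Pick, for every pixel of the left image, the disparity whose
    window-aggregated sum of absolute differences (SAD) is smallest."""
    h, w = left.shape
    half = window // 2
    cost_volume = np.empty((max_disp, h, w), dtype=np.float32)

    for d in range(max_disp):
        # Per-pixel absolute difference between left(x) and right(x - d).
        diff = np.full((h, w), INVALID, dtype=np.float32)
        diff[:, d:] = np.abs(left[:, d:] - right[:, : w - d])

        # Aggregate the cost over a square window (simple box filter).
        padded = np.pad(diff, half, mode="edge")
        agg = np.zeros((h, w), dtype=np.float32)
        for dy in range(window):
            for dx in range(window):
                agg += padded[dy:dy + h, dx:dx + w]
        cost_volume[d] = agg

    # Winner-take-all: the disparity with the minimum aggregated cost wins.
    return cost_volume.argmin(axis=0)

if __name__ == "__main__":
    # Synthetic rectified pair: the left view is the right view shifted by 4 px,
    # so the recovered disparity should be 4 away from the left image border.
    rng = np.random.default_rng(0)
    right = (rng.random((64, 96)) * 255).astype(np.float32)
    left = np.roll(right, 4, axis=1)
    disp = wta_disparity(left, right, max_disp=8, window=5)
    print("median disparity (interior):", np.median(disp[:, 8:]))  # expect 4

In the paper's setting, the same per-pixel cost minimization is extended to disparity flow hypotheses, and all major computations stay in the image space of the given view, which is what makes the mapping onto programmable graphics hardware efficient.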