Image compensation for object detection under rotating camera
Cite this article: Zhai Dingding, Wang Qi, Yang Yan, Wang Fan, Hu Xiaopeng. Image compensation for object detection under rotating camera [J]. Journal of Image and Graphics, 2018, 23(9): 1393-1402.
Authors: Zhai Dingding (翟丁丁), Wang Qi (王琦), Yang Yan (杨燕), Wang Fan (王凡), Hu Xiaopeng (胡小鹏)
Affiliation: School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China
Funding: National Natural Science Foundation of China (No. 61272523)
Abstract: Objective In moving object detection under a rotating, scanning camera, traditional linear models cannot handle the nonlinear transform between images introduced by the camera's rotating-scan motion. This leads to inaccurate image compensation, which in turn causes large errors and false detections of moving objects. To solve this problem, an image compensation method for an area-array camera under rotating-scan conditions is proposed; its distinctive feature is that it compensates for background motion and the nonlinear image transform simultaneously, enabling fast and reliable detection of moving objects. Method Image matching is performed first. A nonlinear model of the camera's rotating scan is then established and converted into a linear estimation problem through a parameter space transformation, and the Hough transform is used to estimate the parameters of the resulting equation quickly and robustly. This resolves the nonlinear transform between images acquired under rotating-scan conditions and yields accurate image compensation. On this basis, moving objects can be detected with methods such as inter-frame differencing. Result Experimental results show that, under rotating-scan conditions, the proposed method compensates for background motion and the nonlinear transform between images simultaneously and removes most of the matching errors caused by parallax effects. In the experiments, the method runs at 50 frames/s, meeting real-time requirements. Conclusion Under rotating-scan conditions of an area-array camera, compared with traditional image compensation methods based on linear models, the proposed method quickly and accurately resolves the nonlinear transform between images on top of background compensation, so that moving objects can be extracted more effectively; the method therefore has practical value.
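The core computational idea described above, converting the rotating-scan model into a linear estimation problem and solving it with a Hough-style vote over the parameter space, can be illustrated with a deliberately simplified stand-in model, since the paper's full camera equation is not reproduced here. The C++ sketch below assumes a pure pan about the vertical axis, a known focal length f (in pixels), and image x-coordinates centred on the principal point; the bin range and the name estimatePanAngle are illustrative choices, not the paper's.

```cpp
// Hough-style voting over a camera rotation angle, as a minimal stand-in for the
// paper's parameter-space estimation (the full rotating-scan camera equation is not
// reproduced in the abstract).  Assumptions: pure pan about the vertical axis, known
// focal length f (pixels), image x-coordinates already centred on the principal point.
#include <algorithm>
#include <cmath>
#include <vector>

struct PointMatch { double x1, x2; };  // x-coordinates of one matched pair (prev, cur)

// Under a pure pan by theta, x2 = f * tan(atan(x1 / f) + theta), so every matched pair
// casts one vote for theta = atan(x2 / f) - atan(x1 / f).  Inlier (background) pairs pile
// up in one accumulator bin; outliers from moving objects or mismatches scatter.
double estimatePanAngle(const std::vector<PointMatch>& matches, double f,
                        double thetaMin = -0.2, double thetaMax = 0.2, int bins = 400)
{
    std::vector<int> accumulator(bins, 0);
    const double step = (thetaMax - thetaMin) / bins;
    for (const PointMatch& m : matches) {
        const double theta = std::atan2(m.x2, f) - std::atan2(m.x1, f);
        const int b = static_cast<int>((theta - thetaMin) / step);
        if (b >= 0 && b < bins) ++accumulator[b];
    }
    const int best = static_cast<int>(
        std::max_element(accumulator.begin(), accumulator.end()) - accumulator.begin());
    return thetaMin + (best + 0.5) * step;  // centre of the winning bin
}
```

Voting instead of least-squares fitting is what gives the robustness: pairs belonging to moving objects or mismatches spread their votes over many bins, while background pairs concentrate on the true angle.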

Keywords: rotating scan  nonlinear transform  Hough transform  inter-frame difference  image compensation
Received: 22 January 2018
Revised: 3 April 2018

Image compensation for object detection under rotating camera
Zhai Dingding, Wang Qi, Yang Yan, Wang Fan and Hu Xiaopeng. Image compensation for object detection under rotating camera [J]. Journal of Image and Graphics, 2018, 23(9): 1393-1402.
Authors: Zhai Dingding, Wang Qi, Yang Yan, Wang Fan and Hu Xiaopeng
Affiliation: School of Computer Science and Technology, Dalian University of Technology, Dalian 116024, China
Abstract: Objective In the field of moving object detection, detection with fixed cameras has gradually matured. In many practical applications, however, camera motion such as a rotating scan is required to enlarge the monitoring range and achieve gaze monitoring. Compared with moving object detection under fixed-camera conditions, camera motion makes detection considerably more difficult, and image compensation is needed to eliminate the image transformation it causes. However, traditional linear models cannot handle the nonlinear transform generated by the rotating-scan motion of the camera. Under rotating-scan conditions, the key step of image compensation is to find an accurate motion model that describes the transformations between image frames, including rotation, translation, and scaling, and existing methods cannot simultaneously meet the application requirements for computation time and accuracy. To solve this problem, a robust image compensation method for camera rotating scan is proposed, which simultaneously compensates for background motion and the nonlinear transform between images.

Method Our method achieves image compensation for camera rotating scan in four steps. First, corresponding point pairs are obtained through image matching. Feature points in the current frame are extracted with the Features from Accelerated Segment Test (FAST) corner detector and matched with those in the previous frame. The global displacement of the background is then computed from the matched points. On this basis, a Kalman filter updates its state and predicts the global displacement of the next frame, and hence the positions at which the current feature points will appear in the next image. The feature points in the next frame that match the current feature points are therefore searched for only within the predicted image area; because the search area is reduced, matching accuracy is improved. Second, a global transformation model between adjacent frames is established. Based on an analysis of the camera imaging mechanism under rotating scan, a nonlinear motion model is proposed; from this model, a camera equation is established and then converted into a linear problem by a parameter space transformation. Third, the Hough transform is used to estimate the parameters of the global motion model from the matched point pairs. The global motion model is then mapped back into the image to obtain the coordinate transformation between adjacent images, and the images are normalized to a unified coordinate system. This step implements both background motion compensation and nonlinear transform compensation. Finally, foreground objects are segmented from the image. A block-based inter-frame difference method is used to detect moving objects. To extract the foreground objects completely, a morphological opening is applied to eliminate isolated pixels and small line segments, and a closing operation then fills the holes in the object regions to keep the objects complete.
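As a concrete illustration of the matching step above, the following sketch uses OpenCV; the abstract only states that the algorithm is implemented in C++, so the use of OpenCV, the FAST threshold, the patch and search radii, and the correlation threshold are assumptions for illustration. FAST corners from the previous frame are re-located in the current frame inside a small search window centred on the position predicted by a Kalman filter that tracks the global background displacement.

```cpp
// Sketch of the matching step, assuming OpenCV (the abstract only states that the
// algorithm is implemented in C++).  FAST threshold, patch/search radii, and the
// correlation threshold are illustrative values, not the paper's.
#include <opencv2/opencv.hpp>
#include <vector>

struct MatchedPair { cv::Point2f prev, cur; };

// Re-locate FAST corners of the previous frame in the current frame, searching only in a
// small window centred on the position predicted by a Kalman filter whose state is the
// global background displacement [dx, dy].
std::vector<MatchedPair> matchWithPrediction(const cv::Mat& prevGray, const cv::Mat& curGray,
                                             cv::KalmanFilter& kf)
{
    cv::Mat pred = kf.predict();                                  // predicted global displacement
    const cv::Point2f d(pred.at<float>(0), pred.at<float>(1));

    std::vector<cv::KeyPoint> corners;
    cv::FAST(prevGray, corners, 25, true);                        // FAST corner detection

    std::vector<MatchedPair> pairs;
    const int patch = 7, search = 12;                             // template and search radii
    const cv::Rect prevBounds(0, 0, prevGray.cols, prevGray.rows);
    const cv::Rect curBounds(0, 0, curGray.cols, curGray.rows);
    for (const cv::KeyPoint& kp : corners) {
        const cv::Point2f p = kp.pt, q = p + d;                   // q: predicted position in current frame
        const cv::Rect tmplRect(cvRound(p.x) - patch, cvRound(p.y) - patch,
                                2 * patch + 1, 2 * patch + 1);
        const cv::Rect searchRect(cvRound(q.x) - search, cvRound(q.y) - search,
                                  2 * search + 1, 2 * search + 1);
        if ((tmplRect & prevBounds) != tmplRect || (searchRect & curBounds) != searchRect)
            continue;                                             // skip corners near the border
        cv::Mat score;
        cv::matchTemplate(curGray(searchRect), prevGray(tmplRect), score, cv::TM_CCOEFF_NORMED);
        double maxVal; cv::Point maxLoc;
        cv::minMaxLoc(score, nullptr, &maxVal, nullptr, &maxLoc);
        if (maxVal > 0.8)                                         // accept only confident matches
            pairs.push_back({p, cv::Point2f(float(searchRect.x + maxLoc.x + patch),
                                            float(searchRect.y + maxLoc.y + patch))});
    }
    return pairs;
}
```

Here kf could be constructed as cv::KalmanFilter kf(2, 2, 0) with identity transition and measurement matrices (a simple random-walk model of the global displacement); after the global model has been estimated from the returned pairs, the measured displacement would be fed back with kf.correct() so that the next prediction stays accurate.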
Result To validate the proposed method, experiments are conducted on several videos covering grass, a traffic road section, indoor scenes, and other real scenes. All experiments run on the Windows platform, and the algorithm is implemented in C++. The camera is a Hikvision DS-2DF230IW-A with a resolution of 1 280×720. To evaluate the performance of the method, we compare it with other global motion models, including the affine transformation model and a local linear model. The experimental results can be summarized as follows. When the frame interval is small, the affine transformation model produces a large error, whereas the local linear model and the method presented in this paper achieve better results. As the rotation angle of the camera increases, the nonlinear transformation becomes increasingly significant; for the local linear model, the compensation result contains isolated pixels and small segments caused by edge effects. The method proposed in this paper removes 90% of these isolated pixels and small segments, which solves the nonlinear transformation problem for camera rotating scan. In addition, the proposed method can be solved quickly through the camera equation and the Hough transform, with a processing speed of 50 frames per second (fps), which meets real-time requirements. The method also has a limitation: it is only suitable for small camera pitch angles, and the influence of the pitch angle on the results requires further analysis and research.

Conclusion Detecting moving objects with a rotating-scan camera is difficult because the motion of the camera causes both background movement and image deformation, and image compensation is required to remove them. The quality of the image compensation directly affects the final result of moving object detection, and traditional methods do not consider the nonlinear transformation thoroughly. This paper analyzes the camera imaging mechanism under rotating-scan conditions and then presents a nonlinear transformation model and the corresponding calculation method. The results show that, compared with existing methods, the proposed method achieves real-time performance and smaller compensation errors under rotating-scan conditions. On the basis of this method, the object detection problem in a dynamic background is converted into an object detection problem in a static background, and reliable detection of moving objects is then achieved by frame differencing. As pan-tilt-zoom monitoring is increasingly widely used for scanning and monitoring large-scale scenes, the proposed method has practical value for object detection.
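To make the detection step concrete, the sketch below shows one way the compensation, the block-based inter-frame difference, and the morphological post-processing could be chained, again assuming OpenCV. The 3×3 pixel mapping H stands in for the frame-to-frame transform estimated by the paper's rotation model (whose closed form is not given in the abstract), and the block size and thresholds are illustrative values.

```cpp
// Sketch of the detection step, assuming OpenCV.  H is a 3x3 pixel mapping standing in
// for the estimated rotating-scan transform (its closed form is not given in the
// abstract); block size and thresholds are illustrative values.
#include <opencv2/opencv.hpp>

cv::Mat detectMovingBlocks(const cv::Mat& prevGray, const cv::Mat& curGray,
                           const cv::Mat& H, int block = 16, double blockThresh = 12.0)
{
    // 1) Compensation: warp the previous frame into the coordinate system of the current one.
    cv::Mat warped;
    cv::warpPerspective(prevGray, warped, H, curGray.size());

    // 2) Block-based inter-frame difference on the compensated pair.
    cv::Mat diff, mask = cv::Mat::zeros(curGray.size(), CV_8U);
    cv::absdiff(curGray, warped, diff);
    for (int y = 0; y + block <= diff.rows; y += block)
        for (int x = 0; x + block <= diff.cols; x += block) {
            const cv::Rect roi(x, y, block, block);
            if (cv::mean(diff(roi))[0] > blockThresh)   // mark blocks with a large mean difference
                mask(roi).setTo(255);
        }

    // 3) Opening removes isolated pixels and small segments; closing fills holes in objects.
    const cv::Mat k = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, k);
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, k);
    return mask;                                        // 255 = candidate moving-object region
}
```

In practice the border region that the warp leaves without valid data would also be masked out before differencing, and connected regions of the returned mask would then be taken as moving-object candidates, for example with cv::findContours.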
Keywords: rotating-scan  nonlinear transform  Hough transform  frame difference  image compensation