Appearance based background subtraction for PTZ cameras
Abstract: Traditional background subtraction algorithms assume the camera is static and rely on simple per-pixel models of scene appearance. This leads to false detections when the camera moves. While this can sometimes be addressed by online image registration, that approach is prone to dramatic failures and long-term drift. We present a novel background subtraction algorithm designed for pan-tilt-zoom cameras that overcomes this challenge without the need for explicit image registration. The proposed algorithm automatically trains a discriminative background model that is global in the sense that it is the same regardless of image location. Our approach first extracts multiple features from across the image and uses principal component analysis for dimensionality reduction. The extracted features are then grouped to form a Bag of Features. A global background model is then learned from the bagged features using a Support Vector Machine. The proposed approach is fast and accurate. Having a single global model makes it computationally inexpensive in comparison to traditional pixel-wise models. It outperforms several state-of-the-art algorithms on the CDnet 2014 pan-tilt-zoom and baseline categories and on the Hopkins155 dataset. In particular, it achieves an F-Measure of 75.41% on the PTZ category of the CDnet dataset, significantly better than the previously reported best score of 62.07%. These results show that by removing the coupling between the detection model and spatial location, we significantly increase robustness to camera motion.
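The abstract outlines a pipeline of per-pixel appearance features, PCA dimensionality reduction, a Bag-of-Features encoding, and a single global SVM classifier. The following is a minimal sketch of that pipeline, not the authors' code: the feature dimensions, codebook size, distance-based encoding, and SVM parameters are illustrative assumptions, and the synthetic data stands in for features extracted from PTZ video.

```python
# Sketch (assumed, not the paper's implementation) of the described pipeline:
# per-pixel appearance features -> PCA -> bag-of-features encoding -> one
# global SVM background/foreground classifier shared by all image locations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for per-pixel appearance features (e.g. colour/texture
# descriptors sampled over many frames). Real features would come from video.
n_bg, n_fg, feat_dim = 5000, 1250, 32
X_bg = rng.normal(0.0, 1.0, size=(n_bg, feat_dim))   # background samples
X_fg = rng.normal(2.0, 1.0, size=(n_fg, feat_dim))   # foreground samples
X = np.vstack([X_bg, X_fg])
y = np.concatenate([np.zeros(n_bg), np.ones(n_fg)])  # 0 = background

# 1) Dimensionality reduction with PCA, as stated in the abstract.
pca = PCA(n_components=8).fit(X)
X_red = pca.transform(X)

# 2) Bag of Features: quantise the reduced descriptors against a learned
#    codebook; here each sample is encoded by its distances to the codewords
#    (one plausible encoding; the abstract does not specify the exact scheme).
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X_red)
X_enc = codebook.transform(X_red)  # distance-to-codeword encoding

# 3) A single global discriminative background model: one SVM used at every
#    image location, so no per-pixel model or image registration is required.
clf = SVC(kernel="rbf", C=1.0).fit(X_enc, y)

# At test time, every pixel's feature vector passes through the same chain.
new_feat = rng.normal(2.0, 1.0, size=(1, feat_dim))  # feature of a new pixel
label = clf.predict(codebook.transform(pca.transform(new_feat)))
print("foreground" if label[0] == 1 else "background")
```

Because the classifier is shared across all pixel locations, the per-frame cost is one feature extraction and one batched SVM evaluation, rather than maintaining and updating a separate statistical model at every pixel.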
Keywords: Background subtraction; Foreground detection; Image segmentation
This article is indexed in databases including ScienceDirect.