Similar literature
20 similar documents found (search time: 343 ms)
1.
The amount of captured video is growing with the increasing number of video cameras, especially the millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval are time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing such video. It provides a short video representation while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by showing multiple activities simultaneously, even when they originally occurred at different times. The synopsis video also serves as an index into the original video by pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of endless video streams, such as those generated by webcams and surveillance cameras. It can address queries like "Show in one minute a synopsis of this camera's broadcast during the past day." This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames), and (ii) a response phase, generating the video synopsis as a response to the user's query.
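The response phase can be pictured with a toy sketch: once activities are stored as object "tubes", each tube gets a new start time inside a short synopsis window. The greedy placement below is only an illustration of that idea under invented tube and cost definitions (coarse occupancy grids, cyclic synopsis time), not the authors' algorithm.

```python
import numpy as np

# A "tube" is one object's activity: its duration and a coarse occupancy footprint
# (the grid cells the object covers). Both definitions are assumptions of this sketch.

def collision_cost(schedule, tube, start, synopsis_len):
    """Count grid cells the tube would share with already placed tubes."""
    dur, footprint = tube
    cost = 0
    for t in range(dur):
        s = (start + t) % synopsis_len        # synopsis time is cyclic in this toy model
        cost += np.count_nonzero(schedule[s] & footprint)
    return cost

def place_tubes(tubes, synopsis_len, grid_shape=(9, 16)):
    """Greedily pick, for each tube, the start time with the fewest collisions."""
    schedule = np.zeros((synopsis_len,) + grid_shape, dtype=bool)
    starts = []
    for dur, footprint in tubes:
        costs = [collision_cost(schedule, (dur, footprint), s, synopsis_len)
                 for s in range(synopsis_len)]
        best = int(np.argmin(costs))
        starts.append(best)
        for t in range(dur):                  # mark the newly occupied cells
            schedule[(best + t) % synopsis_len] |= footprint
    return starts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tubes = [(rng.integers(5, 20), rng.random((9, 16)) > 0.9) for _ in range(8)]
    print(place_tubes(tubes, synopsis_len=50))
```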

2.
Time-Delayed Correlation Analysis for Multi-Camera Activity Understanding (total citations: 1; self-citations: 0; citations by others: 1)
We propose a novel approach to understanding activities from partial observations monitored through multiple non-overlapping cameras separated by unknown time gaps. In our approach, each camera view is first decomposed automatically into regions based on the correlation of object dynamics across different spatial locations in all camera views. A new Cross Canonical Correlation Analysis (xCCA) is then formulated to discover and quantify the time-delayed correlations of regional activities observed within and across multiple camera views in a single common reference space. We show that learning the time-delayed activity correlations offers important contextual information for (i) spatial and temporal topology inference of a camera network, (ii) robust person re-identification, and (iii) global activity interpretation and video temporal segmentation. Crucially, in contrast to conventional methods, our approach does not rely on either intra-camera or inter-camera object tracking; it can thus be applied to low-quality surveillance videos featuring severe inter-object occlusions. The effectiveness and robustness of our approach are demonstrated through experiments on 330 hours of video captured from 17 cameras installed at two busy underground stations with complex and diverse scenes.
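The notion of a time-delayed correlation between regional activities can be illustrated with a much simpler stand-in for xCCA: correlate two per-region activity signals over a range of lags and keep the best lag. The choice of activity signal (per-frame foreground pixel counts) and the plain lagged correlation are assumptions of this sketch, not the paper's formulation.

```python
import numpy as np

def best_time_delay(act_a, act_b, max_lag):
    """Return the lag (in frames) at which two regional activity series correlate most.
    act_a, act_b: 1-D arrays, e.g. per-frame foreground pixel counts in two regions."""
    a = (act_a - act_a.mean()) / (act_a.std() + 1e-8)
    b = (act_b - act_b.mean()) / (act_b.std() + 1e-8)
    lags = list(range(-max_lag, max_lag + 1))
    scores = []
    for lag in lags:
        if lag >= 0:                               # correlate a[t] with b[t + lag]
            x, y = a[:len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[:len(b) + lag]
        n = min(len(x), len(y))
        scores.append(float(np.mean(x[:n] * y[:n])))
    best = int(np.argmax(scores))
    return lags[best], scores[best]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    src = rng.random(500)
    delayed = np.roll(src, 40) + 0.1 * rng.random(500)   # region B sees activity 40 frames later
    print(best_time_delay(src, delayed, max_lag=60))     # expect a lag near 40
```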

3.
Images and videos captured by portable devices (e.g., cellphones, DV cameras) often have limited fields of view. Image stitching, also referred to as mosaicing or panorama creation, can produce a wide-angle image by compositing several photographs together. Although various methods have been developed for image stitching in recent years, few works address the video stitching problem. In this paper, we present the first system to stitch videos captured by hand-held cameras. We first recover the 3D camera paths and a sparse set of 3D scene points using the CoSLAM system, and densely reconstruct the 3D scene in the overlapping regions. Then, we generate a smooth virtual camera path that stays in the middle of the original paths. Finally, the stitched video is synthesized along the virtual path as if it had been taken from this new trajectory. The warping required for the stitching is obtained by optimizing over both temporal stability and alignment quality, while leveraging the 3D information at our disposal. The experiments show that our method can produce high-quality stitching results for various challenging scenarios.

4.
Video cameras must produce images at a reasonable frame rate and with a reasonable depth of field. These requirements impose fundamental physical limits on the spatial resolution of the image detector. As a result, current cameras produce videos with a very low resolution. The resolution of videos can be computationally enhanced by moving the camera and applying super-resolution reconstruction algorithms. However, a moving camera introduces motion blur, which limits super-resolution quality. We analyze this effect and derive a theoretical result showing that motion blur has a substantial degrading effect on the performance of super-resolution. The conclusion is that, in order to achieve the highest resolution, motion blur should be avoided. Motion blur can be minimized by sampling the space-time volume of the video in a specific manner. We have developed a novel camera, called the "jitter camera," that achieves this sampling. By applying an adaptive super-resolution algorithm to the video produced by the jitter camera, we show that resolution can be notably enhanced for stationary or slowly moving objects, while it is improved slightly or left unchanged for objects with fast and complex motions. The end result is a video that has a significantly higher resolution than the captured one.

5.
Videos captured by consumer cameras often exhibit temporal variations in color and tone that are caused by camera auto-adjustments such as white balance and exposure. When such videos are sub-sampled to play fast-forward, as in the increasingly popular timelapse and hyperlapse formats, these temporal variations are exacerbated and appear as visually disturbing high-frequency flickering. Previous techniques for photometrically stabilizing videos typically rely on computing dense correspondences between video frames and use these correspondences to remove all color changes in the video sequences. However, this approach is limited in fast-forward videos, which often have large content changes and may also exhibit changes in scene illumination that should be preserved. In this work, we propose a novel photometric stabilization algorithm for fast-forward videos that is robust to large content variation across frames. We compute pairwise color and tone transformations between neighboring frames and smooth these pairwise transformations while taking into account the possibility of scene/content variations. This allows us to eliminate high-frequency fluctuations while still adapting to real variations in scene characteristics. We evaluate our technique on a new dataset consisting of controlled synthetic and real videos, and demonstrate that our technique outperforms the state of the art.
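A stripped-down version of the pairwise-transform-and-smooth idea, under strong assumptions (frames already aligned, a global per-channel gain/bias as the color transform, a plain Gaussian as the temporal smoother), might look like the sketch below; the paper's method is considerably more involved.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def pairwise_gain_bias(prev, curr):
    """Least-squares per-channel gain/bias mapping curr's colors toward prev's.
    Assumes the two frames are already roughly aligned (a strong simplification)."""
    gains, biases = [], []
    for c in range(3):
        x = curr[..., c].ravel().astype(np.float64)
        y = prev[..., c].ravel().astype(np.float64)
        A = np.stack([x, np.ones_like(x)], axis=1)
        (g, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        gains.append(g); biases.append(b)
    return np.array(gains), np.array(biases)

def stabilize(frames, sigma=5.0):
    """Chain the pairwise transforms to frame 0, low-pass them, and re-apply the trend."""
    n = len(frames)
    gains = np.ones((n, 3)); biases = np.zeros((n, 3))
    for i in range(1, n):                               # accumulate transforms to frame-0 space
        g, b = pairwise_gain_bias(frames[i - 1], frames[i])
        gains[i] = gains[i - 1] * g
        biases[i] = gains[i - 1] * b + biases[i - 1]
    sg = gaussian_filter1d(gains, sigma, axis=0)        # keep slow illumination trends
    sb = gaussian_filter1d(biases, sigma, axis=0)
    out = []
    for i, f in enumerate(frames):
        mapped = f.astype(np.float64) * gains[i] + biases[i]   # express in frame-0 colors
        corr = (mapped - sb[i]) / sg[i]                        # re-apply the smoothed trend
        out.append(np.clip(corr, 0, 255).astype(np.uint8))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    base = (rng.random((48, 64, 3)) * 255).astype(np.uint8)
    flicker = [np.clip(base * (1.0 + 0.2 * np.sin(i)), 0, 255).astype(np.uint8) for i in range(60)]
    print(len(stabilize(flicker)), "frames stabilized")
```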

6.
Shadow removal for videos is an important and challenging vision task. In this paper, we present a novel shadow removal approach for videos captured by freely moving cameras using illumination transfer optimization. We first detect the shadows of the input video using interactive fast video matting. Then, based on the shadow detection results, we decompose the input video into overlapping 2D patches and find coherent correspondences between the shadow and non-shadow patches via a discrete optimization technique built on a patch similarity metric. We finally remove the shadows of the input video sequences using an optimized illumination transfer method, which reasonably recovers the illumination information of the shadow regions and produces spatio-temporally consistent shadow-free videos. We also process the shadow boundaries to make the transition between shadow and non-shadow regions smooth. Compared with previous works, our method can handle videos captured by freely moving cameras and achieves better shadow removal results. We validate the effectiveness of the proposed algorithm via a variety of experiments.

7.
This paper presents a novel compressed sensing (CS) algorithm and camera design for light field video capture using a single-sensor consumer camera module. Unlike microlens light field cameras, which sacrifice spatial resolution to obtain angular information, our CS approach is designed for capturing light field videos with high angular, spatial, and temporal resolution. The compressive measurements required by CS are obtained using a random color-coded mask placed between the sensor and aperture planes. The convolution of the incoming light rays from different angles with the mask results in a single image on the sensor, thereby achieving a significant reduction in the bandwidth required for capturing light field videos. We propose to change the random pattern on the spectral mask between consecutive frames in a video sequence and to extract spatio-angular-spectral-temporal 6D patches. Our CS reconstruction algorithm for light field videos recovers each frame while taking into account the neighboring frames, achieving significantly higher reconstruction quality with reduced temporal incoherencies compared with previous methods. Moreover, a thorough analysis of various sensing models for compressive light field video acquisition is conducted to highlight the advantages of our method. The results show a clear advantage of our method for monochrome sensors, as well as sensors with color filter arrays.

8.
We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures high-resolution videos of pedestrians as they move through a designated area. A wide-FOV static camera can track multiple pedestrians, while any PTZ active camera can capture high-quality video of one pedestrian at a time. We formulate the multi-camera control strategy as an online scheduling problem and propose a solution that combines the information gathered by the wide-FOV cameras with weighted round-robin scheduling to guide the available PTZ cameras, such that each pedestrian is observed by at least one PTZ camera while in the designated area. A centerpiece of our work is the development and testing of experimental surveillance systems within a visually and behaviorally realistic virtual environment simulator. The simulator is valuable because our research would be more or less infeasible in the real world, given the impediments to deploying and experimenting with appropriately complex camera sensor networks in large public spaces. In particular, we demonstrate our surveillance system in a virtual train station environment populated by autonomous, lifelike virtual pedestrians, wherein easily reconfigurable virtual cameras generate synthetic video feeds. The video streams emulate those generated by real surveillance cameras monitoring richly populated public spaces. A preliminary version of this paper appeared as [1].
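The weighted round-robin control loop can be sketched as follows; the weights, the one-target-per-round behaviour, and the credit bookkeeping are illustrative assumptions rather than the system's actual scheduler.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Pedestrian:
    pid: int
    weight: float        # e.g. proportional to how soon the person will leave the area
    credit: float = 0.0  # accumulated scheduling credit

def schedule_round(pedestrians: List[Pedestrian], n_ptz: int) -> Dict[int, int]:
    """One round of a simple weighted round-robin: every pedestrian earns credit
    proportional to its weight, and the free PTZ cameras serve the highest-credit
    pedestrians. Returns {camera_index: pedestrian_id} for this round."""
    for p in pedestrians:
        p.credit += p.weight
    chosen = sorted(pedestrians, key=lambda p: p.credit, reverse=True)[:n_ptz]
    assignment = {}
    for cam, p in enumerate(chosen):
        assignment[cam] = p.pid
        p.credit = 0.0            # served pedestrians give up their credit
    return assignment

if __name__ == "__main__":
    people = [Pedestrian(0, 3.0), Pedestrian(1, 1.0), Pedestrian(2, 2.0)]
    for t in range(4):            # over several rounds everyone is eventually covered
        print(t, schedule_round(people, n_ptz=1))
```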

9.
With the advent of wearable computing, personal imaging, photojournalism, and personal video diaries, the need for automated archiving of the videos captured by such devices has become quite pressing. The principal device used to capture the human-environment interaction in these settings is a wearable camera (usually a head-mounted camera). The videos obtained from such a camera are raw, unedited records of the wearer's (the camera user's) visual interaction with the surroundings. The focus of our research is to develop post-processing techniques that can automatically abstract videos based on episode detection. An episode is defined as a part of the video that was captured when the user was interested in an external event and paid attention to record it. Our research is based on the assumption that head movements have distinguishable patterns during an episode and that these patterns can be exploited to differentiate between an episode and a non-episode. Here we present a novel algorithm that exploits head and body behaviour to detect episodes. The algorithm's performance is measured by comparing the ground truth (user-declared episodes) with the detected episodes. The experiments show the high degree of success we achieved with our proposed method on several hours of head-mounted video captured in varying locations.

10.
Integration of information from multiple cameras is essential in television production and intelligent surveillance systems. We propose an autonomous system for personalized production of basketball videos from multi-sensored data under limited display resolution. The problem consists of selecting the right view to display among the multiple video streams captured by the investigated camera network. A view is defined by the camera index and the parameters of the image cropped within the selected camera. We propose criteria for optimal planning of viewpoint coverage and camera selection. Perceptual comfort is discussed, as well as efficient integration of contextual information, which is implemented by smoothing the generated viewpoint/camera sequences to alleviate flickering visual artifacts and discontinuous story-telling artifacts. We design and implement the estimation process and verify it by experiments, which show that our method efficiently reduces these artifacts.
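Smoothing a camera-selection sequence to avoid flicker can be illustrated with a small dynamic-programming pass that trades per-frame coverage scores against a switching penalty; the scores and penalty below are placeholders, not the paper's criteria.

```python
import numpy as np

def smooth_camera_selection(scores, switch_cost=0.5):
    """Pick one camera per frame via dynamic programming.

    scores: (T, K) array, scores[t, k] = how well camera k covers the action at frame t
            (whatever coverage metric one chooses; an assumption of this sketch).
    switch_cost: penalty paid whenever the selected camera changes, which suppresses
                 the flickering that per-frame argmax selection would produce."""
    T, K = scores.shape
    best = np.zeros((T, K))            # best accumulated score ending in camera k at time t
    back = np.zeros((T, K), dtype=int)
    best[0] = scores[0]
    for t in range(1, T):
        trans = best[t - 1][:, None] - switch_cost * (1 - np.eye(K))   # staying is free
        back[t] = np.argmax(trans, axis=0)
        best[t] = scores[t] + np.max(trans, axis=0)
    seq = np.zeros(T, dtype=int)       # backtrack the optimal camera sequence
    seq[-1] = int(np.argmax(best[-1]))
    for t in range(T - 2, -1, -1):
        seq[t] = back[t + 1][seq[t + 1]]
    return seq

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    noisy = rng.random((30, 3))
    print(smooth_camera_selection(noisy, switch_cost=0.8))   # mostly constant runs
```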

11.
This paper presents an automatic real-time video matting system. The proposed system consists of two novel components. In order to automatically generate trimaps for live videos, we advocate a Time-of-Flight (TOF) camera-based approach to video bilayer segmentation. Our algorithm combines color and depth cues in a probabilistic fusion framework. The scene depth information returned by the TOF camera is less sensitive to environment changes, which makes our method robust to illumination variation, dynamic backgrounds, and camera motion. In the second step, we perform alpha matting based on the segmentation result. Our matting algorithm uses a set of novel Poisson equations that are derived for handling multichannel color vectors as well as the captured depth information. Real-time processing speed is achieved by optimizing the algorithm for parallel processing on graphics hardware. We demonstrate the effectiveness of our matting system on an extensive set of experimental results.

12.
With the recent popularization of mobile video cameras, including camera phones, a new technology has been emerging: mobile video surveillance, which uses mobile video cameras for video surveillance. Such videos, however, may infringe upon the privacy of others by disclosing privacy-sensitive information (PSI), i.e., their appearances. To prevent videos from infringing on the right to privacy, new techniques are required that automatically obscure PSI regions. The problem is how to determine the PSI regions to be obscured while maintaining enough video content to convey the camera persons' capture intentions, i.e., what they want to record in their videos to achieve their surveillance tasks. To this end, we introduce a new concept called intended human objects, defined as the human objects essential to the capture intentions, and develop a new method called intended human object detection that automatically detects the intended human objects in videos taken by different camera persons. Building on intended human object detection, we develop a system for automatically obscuring PSI regions. We experimentally show the performance of intended human object detection and the contributions of the features used. Our user study shows the potential applicability of our proposed system.

13.
In this paper we present an automatic method for calibrating a network of cameras that works by analyzing only the motion of silhouettes in the multiple video streams. This is particularly useful for the automatic reconstruction of a dynamic event using a camera network in situations where pre-calibration of the cameras is impractical or even impossible. The key contribution of this work is a RANSAC-based algorithm that simultaneously computes the epipolar geometry and synchronization of a pair of cameras only from the motion of silhouettes in video. Our approach first independently computes the fundamental matrix and synchronization for multiple pairs of cameras in the network. In the next stage, the calibration and synchronization of the complete network are recovered from the pairwise information. Finally, a visual-hull algorithm is used to reconstruct the shape of the dynamic object from its silhouettes in video. For unsynchronized video streams with sub-frame temporal offsets, we interpolate silhouettes between successive frames to obtain more accurate visual hulls. We show the effectiveness of our method by remotely calibrating several different indoor camera networks from archived video streams.
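A much-simplified sketch of jointly searching a temporal offset and a fundamental matrix is shown below. It assumes per-frame candidate point correspondences are already available and ordered consistently (the paper instead works with epipolar tangents to silhouettes), and it leans on OpenCV's RANSAC fundamental-matrix estimator.

```python
import numpy as np
import cv2

def estimate_offset_and_F(feats_a, feats_b, max_offset=25):
    """Jointly search an integer temporal offset and a fundamental matrix.

    feats_a / feats_b: lists indexed by frame, each an (N, 2) array of candidate
    silhouette points (e.g. extremal silhouette pixels, in matching order -- an
    assumption of this sketch). For every candidate offset the per-frame points are
    pooled, F is fitted with RANSAC, and the offset explaining the most inliers wins."""
    best = (None, None, -1)                        # (offset, F, inlier count)
    for off in range(-max_offset, max_offset + 1):
        pts_a, pts_b = [], []
        for t in range(len(feats_a)):
            u = t + off
            if 0 <= u < len(feats_b):
                pts_a.append(feats_a[t]); pts_b.append(feats_b[u])
        if not pts_a:
            continue
        A = np.concatenate(pts_a).astype(np.float32)
        B = np.concatenate(pts_b).astype(np.float32)
        if len(A) < 8:                              # need enough points for F
            continue
        F, mask = cv2.findFundamentalMat(A, B, cv2.FM_RANSAC, 3.0, 0.99)
        if F is None or mask is None:
            continue
        inliers = int(mask.sum())
        if inliers > best[2]:
            best = (off, F, inliers)
    return best
```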

14.
Video stabilization is an important technique in present-day digital cameras, as most cameras are hand-held, mounted on moving platforms, or subjected to atmospheric vibrations. In this paper we propose a novel video stabilization scheme based on estimating the camera motion using maximally stable extremal region (MSER) features. These features, traditionally used in wide-baseline stereo problems, had not previously been explored for video stabilization. Through extensive experiments we show how certain properties of these region features make them suitable for the stabilization task. After estimating the global camera motion parameters using these region features, we smooth the motion parameters using a Gaussian filter to retain the desired motion. Finally, motion compensation is carried out to obtain a stabilized video sequence. A number of examples on real and synthetic videos demonstrate the effectiveness of our proposed approach. We compare our results to existing techniques and show that our proposed approach compares favorably to them. Interframe Transformation Fidelity is used for objective evaluation of our proposed approach.
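The smooth-then-compensate part of this pipeline can be sketched as follows, assuming the per-frame global motions (as would be estimated from matched MSER regions) are already given; the feature detection and matching steps are omitted, so this is an illustration of the smoothing stage only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
import cv2

def stabilize(frames, frame_motions, sigma=15.0):
    """Smooth the camera path and re-warp each frame onto the smoothed path.

    frame_motions[t] = (dx, dy, dtheta): image-space background motion from frame
    t-1 to frame t, assumed to come from matched MSER region centroids (the matching
    step itself is not shown). Gaussian filtering of the cumulative path keeps the
    intended motion and removes jitter, as in the scheme described above."""
    motions = np.asarray(frame_motions, dtype=np.float64)    # (T, 3)
    path = np.cumsum(motions, axis=0)                         # accumulated background motion
    smooth = gaussian_filter1d(path, sigma, axis=0)
    out = []
    for f, (dx, dy, dth) in zip(frames, smooth - path):       # extra shift needed per frame
        h, w = f.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2, h / 2), np.degrees(dth), 1.0)
        M[0, 2] += dx
        M[1, 2] += dy
        out.append(cv2.warpAffine(f, M, (w, h)))
    return out
```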

15.
In this work we propose methods that exploit contextual sensor data modalities for detecting interesting events and extracting high-level contextual information about the recording activity in user-generated videos. Indeed, most camera-enabled electronic devices contain various auxiliary sensors such as accelerometers, compasses, and GPS receivers. Data captured by these sensors during media acquisition have already been used to limit camera degradations such as shake and to provide basic tagging information such as location. However, exploiting the sensor-recording modality for subsequent higher-level information extraction, such as detecting interesting events, has been the subject of rather limited research, further constrained to specialized acquisition setups. In this work, we show how these sensor modalities allow inferring information (camera movements, content degradations) about each individual video recording. In addition, we consider a multi-camera scenario, where multiple user-generated recordings of a common scene (e.g., music concerts) are available. For such scenarios we jointly analyze the multiple video recordings and their associated sensor modalities in order to extract higher-level semantics of the recorded media: based on the orientation of the cameras we identify the region of interest of the recorded scene, and by exploiting correlation in the motion of different cameras we detect generic interesting events and estimate their relative position. Furthermore, by also analyzing the audio content captured by multiple users, we detect more specific interesting events. We show that the proposed multimodal analysis methods perform well on various recordings obtained at real live music performances.

16.
We present a method for capturing the skeletal motions of humans using a sparse set of potentially moving cameras in an uncontrolled environment. Our approach is able to track multiple people even in front of cluttered and non-static backgrounds, with unsynchronized cameras of varying image quality and frame rate. We rely entirely on optical information and do not make use of additional sensor information (e.g., depth images or inertial sensors). Our algorithm simultaneously reconstructs the skeletal pose parameters of multiple performers and the motion of each camera. This is facilitated by a new energy functional that captures the alignment of the model and the camera positions with the input videos in an analytic way. The approach can be adopted in many practical applications to replace complex and expensive motion capture studios with a few consumer-grade cameras, even in uncontrolled outdoor scenes. We demonstrate this on challenging multi-view video sequences captured with unsynchronized and moving (e.g., mobile-phone or GoPro) cameras.

17.

Video surveillance cameras capture a huge amount of data 24 hours a day. However, most of these videos contain redundant data, which makes browsing and analysis difficult. A significant body of research exists on summarization of recorded video, but such schemes have had little impact on video surveillance applications. In contrast, video synopsis is a smart technology that preserves the activities of every single object and projects them concurrently within a condensed time span. The energy minimization module in the video synopsis framework plays a vital role, as it minimizes the activity loss, the number of collisions, and the temporal consistency cost. In most reported schemes, the Simulated Annealing (SA) algorithm is employed to solve the energy minimization problem. However, it suffers from a slow convergence rate, resulting in a high computational load on the system. In order to mitigate this issue, this article presents an improved energy minimization scheme using a hybridization of SA and Teaching-Learning-Based Optimization (TLBO) algorithms. The suggested framework for static surveillance video synopsis generation consists of four computational modules, namely object detection and segmentation, tube formation, optimization, and finally stitching, with the central focus on the optimization module. Thus, the present work deals with an improved hybrid energy minimization problem to achieve a globally optimal solution with reduced computational time. The motivation behind the hybridization (HSATLBO) is that the TLBO algorithm has the ability to search rigorously, ensuring that the optimum solution is reached with less computation, whereas SA can reach the global optimum but may become disarrayed and miss some critical search points. Exhaustive experiments are carried out and the results compared with those of benchmark schemes in terms of minimizing the activity, collision, and temporal consistency costs. All experiments are conducted on five widely used videos taken from standard surveillance video datasets (PETS 2001, MIT Surveillance Dataset, ChangeDetection.Net, PETS 2006, and UMN Dataset) as well as one real surveillance video from the IIIT Bhubaneswar Surveillance Dataset. To make a fair comparison, the performance of the proposed hybrid scheme in solving the video synopsis optimization problem is additionally compared on other benchmark functions. Experimental evaluation and analysis confirm that the proposed scheme outperforms other state-of-the-art approaches. Finally, the suggested scheme can be easily and reliably deployed for off-line video synopsis generation.
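The flavor of hybridizing TLBO's teacher phase with SA's Metropolis acceptance over tube start-time labels can be conveyed with a toy optimizer like the one below; the energy terms, update rules, and parameters are illustrative assumptions, not the HSATLBO formulation.

```python
import numpy as np

def hybrid_minimize(energy, n_tubes, horizon, pop=12, iters=300, t0=1.0, seed=0):
    """Toy hybrid of a TLBO teacher phase and SA acceptance for tube start times.
    'energy' is any callable scoring a label vector (e.g. collision + activity-loss
    + temporal-consistency costs); both the hybrid and the demo energy are sketches."""
    rng = np.random.default_rng(seed)
    population = rng.integers(0, horizon, size=(pop, n_tubes)).astype(float)
    costs = np.array([energy(p) for p in population])
    temp = t0
    for _ in range(iters):
        teacher = population[np.argmin(costs)]
        mean = population.mean(axis=0)
        for i in range(pop):
            tf = rng.integers(1, 3)                     # TLBO teaching factor in {1, 2}
            cand = population[i] + rng.random(n_tubes) * (teacher - tf * mean)
            cand += rng.normal(0, max(1.0, temp * horizon * 0.05), n_tubes)   # SA perturbation
            cand = np.clip(np.round(cand), 0, horizon - 1)
            c = energy(cand)
            if c < costs[i] or rng.random() < np.exp(-(c - costs[i]) / max(temp, 1e-6)):
                population[i], costs[i] = cand, c       # Metropolis acceptance
        temp *= 0.99                                    # cooling schedule
    best = int(np.argmin(costs))
    return population[best].astype(int), float(costs[best])

if __name__ == "__main__":
    durations = np.full(10, 15)                         # toy tubes of equal length
    def toy_energy(starts):
        s = np.asarray(starts); e = s + durations
        overlap = sum(max(0, min(e[i], e[j]) - max(s[i], s[j]))
                      for i in range(len(s)) for j in range(i + 1, len(s)))
        return overlap + 0.01 * s.sum()                 # collisions plus synopsis length
    print(hybrid_minimize(toy_energy, n_tubes=10, horizon=120))
```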


18.
Detecting and tracking people in scenes monitored by cameras is an important step in many application scenarios, such as surveillance, urban planning, and behavioral studies. The amount of data produced by camera feeds is so large that it is vital that these steps be performed with the utmost computational efficiency, and often even in real time. We propose SCOOP, a novel algorithm that reliably localizes people in camera feeds using only the output of a simple background removal technique. SCOOP can handle a single video feed or many. At the heart of our technique is a sparse model for binary motion detection maps that we solve with a novel greedy algorithm based on set covering. We study the convergence and performance of the algorithm under various degradation models, such as noisy observations and crowded environments, and we provide mathematical and experimental evidence of both its efficiency and robustness using standard datasets. This clearly shows that SCOOP is a viable alternative to existing state-of-the-art people localization algorithms, with the marked advantage of real-time computation.
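The greedy set-covering idea can be illustrated as follows: candidate ground positions "explain" sets of foreground pixels, and positions are picked greedily until a new pick stops explaining enough so-far-unexplained pixels. The person templates and the stopping threshold are assumptions of this sketch, not SCOOP itself.

```python
import numpy as np

def greedy_localize(foreground, candidates, min_gain=50):
    """Greedy set-cover style people localization.

    foreground: boolean motion-detection map (H, W) from background removal.
    candidates: dict mapping a candidate ground position -> boolean (H, W) mask of
    the pixels a person standing there would occupy (an assumed template model)."""
    remaining = foreground.copy()
    chosen = []
    while True:
        gains = {pos: int(np.count_nonzero(remaining & mask))
                 for pos, mask in candidates.items() if pos not in chosen}
        if not gains:
            break
        pos, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < min_gain:               # nothing left worth explaining
            break
        chosen.append(pos)
        remaining &= ~candidates[pos]     # those pixels are now explained
    return chosen

if __name__ == "__main__":
    H, W = 60, 80
    def person_mask(x, y, w=6, h=16):
        m = np.zeros((H, W), dtype=bool); m[y:y + h, x:x + w] = True; return m
    truth = person_mask(10, 20) | person_mask(50, 30)
    cands = {(x, y): person_mask(x, y) for x in range(0, 75, 5) for y in range(0, 44, 4)}
    print(greedy_localize(truth, cands))  # expect positions near (10, 20) and (50, 30)
```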

19.
A surveillance video synopsis method based on moving object trajectory optimization (total citations: 1; self-citations: 0; citations by others: 1)
A video synopsis is a short representation that preserves the useful information of the original video, facilitating video storage, browsing, and retrieval. However, the condensed videos produced by most synopsis methods lose a small number of objects and therefore cannot fully represent the content of the original video. This paper introduces a video synopsis method based on object trajectory optimization. First, an improved object trajectory extraction algorithm is used to extract the trajectories of the objects in the original video; then a Markov random field model and a relaxed linear programming algorithm are used to obtain the optimal time label for each trajectory, which is combined with the background sequence and the object trajectories to generate the condensed video. Experimental results show that, compared with traditional video synopsis methods, the condensed video generated by this method achieves a higher condensation ratio while preserving the completeness of the information and maintaining good visual quality.
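The trajectory time-label optimization can be illustrated on a toy MRF whose unary term favors an early start and whose pairwise term penalizes temporal overlap between spatially colliding trajectories. The paper relaxes this labeling to a linear program; the sketch below substitutes a plain iterated-conditional-modes sweep purely for illustration.

```python
import numpy as np

def icm_time_labels(durations, overlaps, horizon, sweeps=20, lam=0.1, seed=0):
    """Assign each trajectory a start time on a simple MRF:
    unary cost = start time (prefer an early, short synopsis),
    pairwise cost = temporal overlap between spatially colliding trajectories.
    The solver here is iterated conditional modes, used only as a stand-in."""
    rng = np.random.default_rng(seed)
    n = len(durations)
    starts = rng.integers(0, horizon, size=n)

    def pair_cost(i, t_i):
        c = 0.0
        for j in range(n):
            if j == i or overlaps[i][j] == 0:
                continue
            ov = max(0, min(t_i + durations[i], starts[j] + durations[j]) - max(t_i, starts[j]))
            c += overlaps[i][j] * ov
        return c

    for _ in range(sweeps):                  # coordinate-wise label updates
        for i in range(n):
            costs = [lam * t + pair_cost(i, t) for t in range(horizon)]
            starts[i] = int(np.argmin(costs))
    return starts

if __name__ == "__main__":
    durations = [20, 15, 25, 10]
    overlaps = np.ones((4, 4)) - np.eye(4)   # assume every pair collides spatially
    print(icm_time_labels(durations, overlaps, horizon=60))
```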

20.
We present a novel approach to optimally retarget videos for displays with differing aspect ratios by preserving salient scene content discovered via eye tracking. Our algorithm performs editing with cut, pan, and zoom operations by optimizing the path of a cropping window within the original video while seeking to (i) preserve salient regions and (ii) adhere to the principles of cinematography. Our approach is (a) content agnostic, as the same methodology is employed to re-edit a wide-angle video recording or a close-up movie sequence captured with a static or moving camera, and (b) independent of video length, and can in principle re-edit an entire movie in one shot. Our algorithm consists of two steps. The first step employs gaze transition cues to detect time stamps where new cuts are to be introduced in the original video via dynamic programming. A subsequent step optimizes the cropping window path (to create pan and zoom effects) while accounting for the original and new cuts. The cropping window path is designed to include maximum gaze information and is composed of piecewise constant, linear, and parabolic segments. It is obtained via L1-regularized convex optimization, which ensures a smooth viewing experience. We test our approach on a wide variety of videos and demonstrate significant improvement over the state of the art, both in terms of computational complexity and qualitative aspects. A study performed with 16 users confirms that our approach results in a superior viewing experience compared to gaze-driven re-editing [JSSH15] and letterboxing methods, especially for wide-angle static camera recordings.
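The cropping-window path optimization can be sketched in one dimension with an off-the-shelf convex solver: a data term keeps the window on the gaze, and L1 penalties on the first, second, and third differences encourage piecewise constant, linear, and parabolic segments. The weights, the 1-D simplification (horizontal center only, no zoom), and the cvxpy-based formulation are placeholders, not the paper's exact optimization.

```python
import numpy as np
import cvxpy as cp

def crop_path(gaze_x, w1=1.0, w2=10.0, w3=100.0):
    """Optimize the horizontal centre of the cropping window over one shot.

    gaze_x: per-frame horizontal gaze position (the saliency signal). The L1 terms
    on the first, second and third differences favor static, pan and parabolic
    (ease-in/ease-out) segments; the weights here are illustrative assumptions."""
    T = len(gaze_x)
    x = cp.Variable(T)
    objective = (cp.sum_squares(x - gaze_x)             # keep gaze inside the window
                 + w1 * cp.norm(cp.diff(x, k=1), 1)
                 + w2 * cp.norm(cp.diff(x, k=2), 1)
                 + w3 * cp.norm(cp.diff(x, k=3), 1))
    cp.Problem(cp.Minimize(objective)).solve()
    return np.asarray(x.value)

if __name__ == "__main__":
    t = np.arange(200)
    gaze = np.where(t < 100, 300, 800) + 15 * np.random.default_rng(3).standard_normal(200)
    path = crop_path(gaze)
    print(path[:5].round(1), path[-5:].round(1))         # smooth path following the gaze shift
```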
