Similar Documents
20 similar documents retrieved.
1.
This paper presents an intelligent HD checkpoint fill-light system based on shutter synchronization. Built around an MSP430F2272 microcontroller, the system samples the camera shutter and vehicle-detector signals and drives the LED lamps and strobe lights in synchronization with the camera. The system meets the fill-light requirements both of license plate recognition and of capturing the front-seat occupants of a vehicle, while significantly reducing power consumption and extending the service life of the strobe lights. It can operate under PC control, or stand-alone with the aid of a photoresistor and a real-time clock.
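The abstract gives only the block-level behaviour; the control flow might look roughly like the following (a Python sketch for readability only: the real firmware is MSP430 C driven by edge interrupts, and every interface name here, shutter, vehicle_detector, led, strobe, ambient_dark, is a hypothetical stand-in):

```python
import time

def fill_light_controller(shutter, vehicle_detector, led, strobe, ambient_dark):
    """Event loop mirroring the described synchronization logic.

    All five arguments are hypothetical hardware wrappers, not a real API.
    """
    while True:
        if vehicle_detector.triggered():       # a vehicle is at the checkpoint
            if ambient_dark():                 # photoresistor says it is dark
                led.on()                       # continuous light for plate OCR
            shutter.wait_for_open_edge()       # sync to the camera exposure
            strobe.fire()                      # flash only during the exposure,
            led.off()                          # saving power and strobe lifetime
        time.sleep(0.001)
```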

2.
We propose an efficient real-time automatic license plate recognition (ALPR) framework, particularly designed to work on CCTV video footage obtained from cameras that are not dedicated to use in ALPR. At present, license plate detection, tracking and recognition are reasonably well-tackled problems, with many successful commercial solutions available. However, existing ALPR algorithms assume that the input video is obtained via a dedicated, high-resolution, high-speed camera and/or supported by a controlled capture environment, with appropriate camera height, focus, exposure/shutter speed and lighting settings. Typical video forensic applications, in contrast, may require searching for a vehicle with a particular number plate in noisy CCTV footage obtained via non-dedicated, medium-to-low resolution cameras working under poor illumination. ALPR in such video content faces severe challenges in the license plate localization, tracking and recognition stages. This paper proposes a novel approach for efficient localization of license plates in video sequences, and uses a revised version of an existing technique for tracking and recognition. A special feature of the proposed approach is that it automatically adjusts to varying camera distances and diverse lighting conditions, a requirement for a video forensic tool that may operate on videos obtained by a diverse set of unspecified, distributed CCTV cameras.
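The abstract does not spell out the localization algorithm; as a hedged sketch of the kind of plate-localization step such frameworks build on, the following OpenCV snippet finds plate-shaped blobs via vertical-edge density and morphological closing (kernel size, aspect-ratio limits and all other thresholds are illustrative assumptions, not the paper's values):

```python
import cv2

def locate_plate_candidates(frame_bgr, min_aspect=2.0, max_aspect=6.0):
    """Return bounding boxes of likely license plates in a video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Plates are rich in vertical strokes; emphasize them with a Sobel filter.
    edges = cv2.Sobel(gray, cv2.CV_8U, dx=1, dy=0, ksize=3)
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    # Close horizontally so character strokes merge into one plate-shaped blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and min_aspect <= w / h <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes
```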

3.
Pan-tilt-zoom (PTZ) cameras are well suited for object identification and recognition in far-field scenes. However, their effective use is complicated by the fact that continuous online camera calibration is needed, and the absolute pan, tilt and zoom values provided by the camera actuators cannot be used because they are not synchronized with the video stream; accurate calibration must therefore be extracted directly from the visual content of the frames. Moreover, the large and abrupt scale changes, the scene background changes due to camera operation, and the need for camera motion compensation make target tracking with these cameras extremely challenging. In this paper, we present a solution that provides continuous online calibration of PTZ cameras and is robust to rapid camera motion and to changes of the environment due to varying illumination or moving objects. The approach also scales beyond thousands of scene landmarks extracted with the SURF keypoint detector. The method directly derives the relationship between the position of a target in the ground plane and the corresponding scale and position in the image, and allows real-time tracking of multiple targets with a high and stable degree of accuracy even at far distances and any zoom level.
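The landmark machinery is only named in the abstract; a minimal sketch of the frame-to-frame matching that online PTZ calibration of this kind relies on, using OpenCV with ORB standing in for SURF (SURF only ships with opencv-contrib), follows:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_to_frame_homography(prev_gray, curr_gray):
    """Estimate the homography induced by a pan/tilt/zoom move between frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < 8:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects matches on moving targets, keeping the static background.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```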

4.

The dynamic range of a scene can be significantly wider than the dynamic range of an image because of the limitations of A/D conversion; in such a situation, many details of the scene cannot be adequately shown in the image. Standard industrial digital cameras are equipped with an auto-exposure function that automatically sets both the aperture value and the camera's exposure time. When imaging a scene with an atypical distribution of light and dark elements, the auto-exposure time may not be optimal. The aim of this work was to improve, at minimal cost, the performance of standard industrial digital cameras. We propose a low-complexity method for creating an HDR-like image from three images captured with different exposure times. The proposed method consists of three algorithms: (1) an algorithm that estimates whether the auto-exposure time is optimal, (2) an algorithm that determines exposure times for two additional images (one shorter and one longer than the auto-exposure time), and (3) an algorithm for HDR-like imaging based on fusion of the three images. The method is implemented on an FPGA inside a standard industrial digital camera. Results show that the proposed approach produces high-quality HDR-like scene-mapped 8-bit images at minimal computational cost; all improvements can be seen in the performance evaluation.
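A minimal numpy sketch of step (3), fusing the three 8-bit images with a hat-shaped well-exposedness weight; the weighting scheme is an illustrative assumption, not the paper's FPGA implementation:

```python
import numpy as np

def fuse_exposures(img_short, img_auto, img_long):
    """Fuse three differently exposed 8-bit images into one HDR-like 8-bit image."""
    stack = np.stack([img_short, img_auto, img_long]).astype(np.float32) / 255.0
    # Hat weight: pixels near mid-gray are trusted, near-black/white ones are not.
    weights = 1.0 - np.abs(stack - 0.5) * 2.0
    weights = np.clip(weights, 1e-3, None)          # avoid divide-by-zero
    fused = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    return (fused * 255.0).astype(np.uint8)
```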


5.
To address the limitation that a conventional three-channel RGB camera cannot fully recover the spectral reflectance of object surfaces even when the illuminant spectrum is known, this paper constructs a multispectral camera-array imaging system. The array consists of 12 Daheng DH-HV1300FM cameras, 11 of whose lenses are fitted with filters of different wavelengths. Exploiting the large number of channels offered by the array, we propose an MSIS-GOC (Multi-spectral Imaging System based on Group of Cameras) algorithm that reliably and efficiently reconstructs the spectral reflectance of the scene. Analysis of simulation results verifies the effectiveness of the system.
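The MSIS-GOC algorithm itself is not detailed in the abstract; as a generic sketch of the underlying linear inverse problem, reflectance can be recovered from the channel responses with a Tikhonov-regularized least-squares solve (the array shapes and the regularizer lam are assumptions):

```python
import numpy as np

def recover_reflectance(responses, sensitivities, illuminant, lam=1e-3):
    """Estimate spectral reflectance r from camera responses c = S diag(l) r.

    responses:     (12,)   channel values for one pixel
    sensitivities: (12, N) spectral sensitivity of each channel over N wavelengths
    illuminant:    (N,)    known light source spectrum
    """
    A = sensitivities * illuminant          # system matrix S diag(l)
    # Tikhonov regularization stabilizes the underdetermined inversion.
    r = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ responses)
    return np.clip(r, 0.0, 1.0)
```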

6.
We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method achieves high-quality markerless facial performance capture in real time from multi-view helmet camera data, employing an actor-specific regressor. The regressor training is tailored to the specified actor's appearance, and we further condition it on the expected illumination conditions and the physical capture rig by generating the training data synthetically. In order to leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that higher quality can be achieved by regressing on multiple video streams than with previous approaches designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that allows cameras to be mounted outside the actor's field of view, which is very beneficial: the cameras are then less of a distraction for the actor and allow an unobstructed line of sight to the director and other actors. Our new real-time facial capture approach has immediate application in on-set virtual production, in particular with the ever-growing demand for motion-captured facial animation in visual effects and video games.
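The abstract names multi-dimensional random ferns without defining them; a single plain random fern for regression, the single-view building block, might look as follows (a hedged sketch, not the paper's multi-view variant):

```python
import numpy as np

class RandomFern:
    """A depth-F fern: F binary pixel comparisons index into 2**F regression bins."""
    def __init__(self, n_tests, n_pixels, output_dim, rng):
        self.pairs = rng.integers(0, n_pixels, size=(n_tests, 2))
        self.bins = np.zeros((2 ** n_tests, output_dim))
        self.counts = np.zeros(2 ** n_tests)

    def _index(self, x):
        bits = x[self.pairs[:, 0]] > x[self.pairs[:, 1]]
        return int(np.dot(bits, 1 << np.arange(len(bits))))

    def fit(self, X, Y):
        for x, y in zip(X, Y):
            i = self._index(x)
            self.counts[i] += 1
            self.bins[i] += (y - self.bins[i]) / self.counts[i]  # running mean

    def predict(self, x):
        return self.bins[self._index(x)]
```

fit expects rows of pixel-intensity features X and target vectors Y; for example, RandomFern(8, 64, 2, np.random.default_rng(0)) builds a depth-8 fern over 64-pixel patches.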

7.
Two key problems for camera networks that observe wide areas with many distributed cameras are self-localization and camera identification. Although there are many methods for localizing the cameras, one of the easiest and most desirable is to estimate camera positions by having the cameras observe each other; hence the term self-localization. If the cameras have a wide viewing field, e.g. an omnidirectional camera, and can observe each other, the baseline distances between pairs of cameras and their relative locations can be determined. However, if the projection of a camera is relatively small in the images of other cameras and is not readily visible, the baselines cannot be detected. In this paper, a method is proposed to determine the baselines and relative locations of these invisible cameras. The method consists of two processes executed simultaneously: (a) statistically detecting the baselines among the cameras, and (b) localizing the cameras by using the information from (a) and propagating triangle constraints. Process (b) handles localization in the case where the cameras observe each other, and it does not require complete observation among the cameras; however, it fails when many cameras cannot observe each other because of poor image resolution. The baseline detection of process (a) solves that problem. The methodology is described in detail, and results are provided for several scenarios.
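The triangle-constraint propagation of process (b) can be illustrated with plain trilateration: once baselines are known, three cameras fix a triangle, and each further camera is placed from two already-localized neighbours. A minimal 2-D sketch, assuming the baseline lengths have already been detected:

```python
import numpy as np

def place_triangle(d01, d02, d12):
    """Place three cameras in 2-D from their pairwise baseline distances."""
    p0 = np.array([0.0, 0.0])
    p1 = np.array([d01, 0.0])
    # Circle intersection: camera 2 lies at distance d02 from p0 and d12 from p1.
    x = (d01**2 + d02**2 - d12**2) / (2.0 * d01)
    y = np.sqrt(max(d02**2 - x**2, 0.0))   # sign ambiguity: a mirror solution exists
    p2 = np.array([x, y])
    return p0, p1, p2
```

The sign ambiguity in y is exactly the mirror ambiguity that additional triangle constraints resolve as they propagate through the network.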

8.
Person re-identification across multiple cameras is difficult due to viewpoint and illumination variations. Most traditional research focuses on developing invariant features that are unaffected by these variations. However, no completely invariant feature has been developed so far, and a fully invariant feature may not exist. We therefore do not seek to develop such ideal features in this paper. Instead, we propose a framework for learning a gallery of persons who appear in the camera network frequently. The gallery contains appearance models of these persons for each camera and viewpoint; given the camera, viewpoint and person identities, the corresponding model is determined. Since these appearance models are specific to each camera and viewpoint, the problems of viewpoint and illumination variations between cameras are solved explicitly, and re-identification becomes a ranking problem. Experiments demonstrate that our framework provides significant improvement on the re-identification problem.
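With the gallery in place, re-identification reduces to ranking the gallery models of the probe's camera and viewpoint by distance; a schematic sketch (the dict-based gallery layout and the Euclidean distance are assumptions):

```python
import numpy as np

def rank_gallery(probe_feature, gallery, camera_id, viewpoint_id):
    """Rank known persons by distance to a probe seen at (camera_id, viewpoint_id).

    gallery: dict mapping (camera, viewpoint, person) -> appearance feature vector
    """
    scores = []
    for (cam, view, person), model in gallery.items():
        if cam == camera_id and view == viewpoint_id:
            scores.append((np.linalg.norm(probe_feature - model), person))
    return [person for _, person in sorted(scores, key=lambda s: s[0])]
```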

9.
In order to monitor sufficiently large areas of interest for surveillance or event detection, we need to look beyond stationary cameras and employ an automatically configurable network of non-overlapping cameras. These cameras need not have overlapping fields of view and should be allowed to move freely in space. Moreover, features like zooming in/out, readily available in today's security cameras, should be exploited in order to focus on any particular area of interest if needed. In this paper, a practical framework is proposed to self-calibrate dynamically moving and zooming cameras and to determine their absolute and relative orientations, assuming that their relative positions are known. A global linear solution is presented for self-calibrating each zooming/focusing camera in the network. After self-calibration, it is shown that only one automatically computed vanishing point and a line lying on any plane orthogonal to the vertical direction are sufficient to infer the dynamic network configuration. Our method generalizes previous work that considers restricted camera motions. Using minimal assumptions, we demonstrate promising results on synthetic as well as real data.
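The paper's single-vanishing-point inference is not reproduced here, but the textbook relation that vanishing-point self-calibration methods build on is easy to state: for two vanishing points of orthogonal scene directions and a known principal point, the focal length follows in closed form (a standard identity, not the paper's full method):

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Focal length (pixels) from two vanishing points of orthogonal directions."""
    d1 = np.asarray(v1, dtype=float) - np.asarray(principal_point, dtype=float)
    d2 = np.asarray(v2, dtype=float) - np.asarray(principal_point, dtype=float)
    f_sq = -np.dot(d1, d2)   # orthogonality of the 3-D directions forces this
    if f_sq <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(f_sq))
```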

10.
Helmholtz stereopsis is a powerful technique for the reconstruction of scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects due to the requirement to sequentially capture reciprocal image pairs (i.e. two images with the camera and light source positions mutually interchanged). In this paper, we propose Colour Helmholtz Stereopsis, a novel framework for Helmholtz stereopsis based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed pipeline uniquely combines a tailored photometric calibration for multiple camera/light source pairs, a novel procedure for spatio-temporal surface chromaticity calibration, and a state-of-the-art Bayesian formulation necessary for accurate reconstruction from a minimal number of reciprocal pairs. In this framework, reflectance is spatially unconstrained both in its chromaticity and in the directional component dependent on the illumination incidence and viewing angles. The proposed approach for the first time enables modelling of dynamic scenes with arbitrary unknown and spatially varying reflectance using a practical acquisition set-up consisting of a small number of cameras and light sources. Experimental results demonstrate the accuracy and flexibility of the technique on a variety of static and dynamic scenes with arbitrary unknown BRDF and with chromaticity ranging from uniform to arbitrary and spatially varying.
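For context, classical Helmholtz stereopsis rests on the reciprocity constraint below, where I_l is the intensity observed by the left camera with the light source at the right position, and I_r the reciprocal measurement; each surface point p with normal n then contributes one linear constraint on n. The wavelength multiplexing proposed here acquires such reciprocal pairs simultaneously rather than sequentially (the notation follows the classical formulation, not necessarily the paper's):

```latex
\left( I_l \, \frac{\hat{v}_l}{\lVert o_l - p \rVert^{2}}
     - I_r \, \frac{\hat{v}_r}{\lVert o_r - p \rVert^{2}} \right) \cdot n = 0,
\qquad
\hat{v}_k = \frac{o_k - p}{\lVert o_k - p \rVert}, \quad k \in \{l, r\}
```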

11.
Hybrid systems of central catadioptric and perspective cameras are desirable in practice, because such a system can capture a large field of view as well as high-resolution images. However, calibrating the system is challenging due to the heavy distortions in catadioptric cameras. In addition, previous calibration methods are only suitable for systems consisting of perspective cameras and catadioptric cameras with parabolic mirrors, and they require priors on the intrinsic parameters of the perspective cameras. In this work, we provide a new approach to these problems. We show that if the hybrid camera system consists of at least two central catadioptric cameras and one perspective camera, both the intrinsic and extrinsic parameters of the system can be calibrated linearly without priors on the intrinsic parameters of the perspective cameras, and our method supports more generic central catadioptric cameras. An approximated polynomial model is derived and used for rectification of the catadioptric images. First, using the epipolar geometry between the perspective and rectified catadioptric images, the distortion parameters of the polynomial model are estimated linearly. Then a new method is proposed to estimate the intrinsic parameters of a central catadioptric camera from the parameters of the polynomial model, so that the catadioptric cameras can be calibrated. Finally, a linear self-calibration method for the hybrid system is given using the calibrated catadioptric cameras. The main advantage of our method is that it not only calibrates both the intrinsic and extrinsic parameters of the hybrid camera system, but also reduces the traditionally nonlinear self-calibration of perspective cameras to a linear process. Experiments show that the proposed method is robust and reliable.

12.
We present a method for active self-calibration of multi-camera systems consisting of pan-tilt zoom cameras. The main focus of this work is on extrinsic self-calibration using active camera control. Our novel probabilistic approach avoids multi-image point correspondences as far as possible, which allows an implicit treatment of ambiguities. The relative poses are optimized by actively rotating and zooming each camera pair in a way that significantly simplifies the problem of extracting correct point correspondences. In a final step we calibrate the entire system using a minimal number of relative poses, selected on the basis of their uncertainty. We exploit active camera control to estimate consistent translation scales for triplets of cameras, which allows us to estimate missing relative poses in the camera triplets. In addition to this active extrinsic self-calibration, we present an extended method for the rotational intrinsic self-calibration of a camera that exploits the rotation knowledge provided by the camera's pan-tilt unit to robustly estimate the intrinsic camera parameters for different zoom steps, as well as the rotation between the pan-tilt unit and the camera. Quantitative experiments on real data demonstrate the robustness and high accuracy of our approach: we achieve a median reprojection error of 0.95 pixels.

13.
A solar LED street-lamp control system based on an STC12 microcontroller is designed. The system uses a variable-step incremental-conductance method to track the maximum power point of the solar panel, making full use of the panel's energy to charge a lead-acid battery. The battery voltage is monitored in real time to prevent overcharging and over-discharging. The LED street lamp is driven by multi-stage constant-current control: based on the measured ambient illuminance, the lamp is operated at different current levels, extending its service life and saving electricity.
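The variable-step incremental-conductance update can be summarized in a few lines (a simulation-style Python sketch; the STC12 firmware constants, such as the gain k, are assumptions since the abstract gives none):

```python
def mppt_step(v, i, v_prev, i_prev, v_ref, k=0.05):
    """One variable-step incremental-conductance update of the panel voltage setpoint.

    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V.
    """
    dv, di = v - v_prev, i - i_prev
    if dv != 0:
        dp_dv = i + v * (di / dv)   # dP/dV = I + V * dI/dV
    else:
        dp_dv = di * v              # dV == 0: fall back to the sign of dI
    step = k * abs(dp_dv)           # variable step: large far from the MPP
    if dp_dv > 0:                   # left of the MPP: raise the setpoint
        v_ref += step
    elif dp_dv < 0:                 # right of the MPP: lower it
        v_ref -= step
    return v_ref                    # unchanged exactly at the MPP
```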

14.
We describe a global illumination method combining two well-known techniques: photon mapping and irradiance caching. Photon mapping has the advantage of being view-independent, but requires a costly additional rendering pass, called final gathering. Irradiance caching, by contrast, is view-dependent: irradiance is computed and cached only on the surfaces of the scene visible to a single camera. To compute records covering the entire scene, irradiance caching has to be run for many cameras, which takes a long time and is a tedious task, since the user has to place the needed cameras manually. Our method exploits the advantages of both techniques and avoids any user intervention: it computes a refined, view-independent irradiance cache from a photon map. The global illumination solution is then rendered interactively using radiance cache splatting.
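On the irradiance-caching side, the decision of whether cached records cover a shading point is classically made with Ward's weighting criterion, sketched below as generic background (not this paper's exact splatting formulation):

```python
import numpy as np

def cache_weight(p, n, rec_pos, rec_normal, rec_radius):
    """Ward-style weight of a cached irradiance record at query point p, normal n."""
    dist_term = np.linalg.norm(p - rec_pos) / rec_radius
    normal_term = np.sqrt(max(1.0 - float(np.dot(n, rec_normal)), 0.0))
    denom = dist_term + normal_term
    # A record contributes when its weight exceeds 1/a for accuracy parameter a.
    return 1.0 / denom if denom > 1e-6 else float("inf")
```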

15.
Time-of-flight (TOF) cameras are sensors that measure the depths of scene points by illuminating the scene with a controlled laser or LED source and then analyzing the reflected light. In this paper, we first describe the underlying measurement principles of time-of-flight cameras, including: (1) pulsed-light cameras, which directly measure the time taken for a light pulse to travel from the device to the object and back, and (2) continuous-wave-modulated light cameras, which measure the phase difference between the emitted and received signals, and hence obtain the travel time indirectly. We review the main existing designs, including prototypes as well as commercially available devices. We also review the relevant camera calibration principles and how they are applied to TOF devices. Finally, we discuss the benefits and challenges of combined TOF and color camera systems.
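For the continuous-wave case, the textbook four-bucket demodulation recovers phase, and hence depth, from four samples of the correlation signal taken 90 degrees apart (a generic sketch; sample-ordering conventions vary between devices):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(a0, a1, a2, a3, f_mod):
    """Depth from four phase-stepped samples of a CW-modulated ToF pixel."""
    phase = np.arctan2(a3 - a1, a0 - a2)    # in (-pi, pi]
    phase = np.mod(phase, 2.0 * np.pi)      # fold into [0, 2*pi)
    # Light travels to the scene and back, hence the factor of 2 in the path.
    depth = C * phase / (4.0 * np.pi * f_mod)
    return depth                            # unambiguous up to C / (2 * f_mod)
```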

16.
An object detection algorithm for color dynamic images from two cameras is proposed for a surveillance system under low illumination. It automatically computes a fuzzy correspondence map and color similarity for low-luminance conditions, detecting weakly chromatic regions in CCD camera images under low illumination and flagging regions where occlusion may occur. Experimental detection results for two dynamic image streams from real surveillance cameras in a downtown area in Japan under low-luminance conditions show that the proposed algorithm achieves 15% higher accuracy than independent detection at the same false-alarm rate, with occlusion regions correctly identified. Moreover, its implementability in demanding surveillance situations is discussed. The proposed algorithm is being considered for use in a low-cost surveillance system in a downtown shopping-mall area in Japan with relatively poor security.

17.
To estimate appearance parameters, traditional SVBRDF acquisition methods require multiple input images captured at various light and camera angles, followed by a post-processing step. For this reason, subjects have been limited to static scenes, or a multi-view system is required to capture dynamic objects. In this paper, we propose a method for simultaneous acquisition of SVBRDF and shape that allows us to capture the material appearance of deformable objects in motion using a single RGBD camera. To do so, we progressively integrate photometric samples of surfaces in motion into a volumetric data structure with a deformation graph. Then, building upon recent advances in fusion-based methods, we estimate SVBRDF parameters in motion. We use a conventional RGBD camera consisting of a colour camera and an infrared camera with active infrared illumination. The colour camera captures diffuse properties, while the infrared camera-illumination module estimates specular properties by means of active illumination. Our joint optimization yields complete material appearance parameters. We demonstrate the effectiveness of our method with extensive evaluation on both synthetic and real data, including various deformable objects of specular and diffuse appearance.

18.
Estimating people's head pose is an important problem for which many solutions have been proposed. Most existing solutions are based on a single camera and assume that the head is confined to a relatively small region of space. If we need to estimate the head pose of persons unintrusively in a large environment, however, several cameras are needed to cover the monitored area. In this work, we propose a novel solution to the multi-camera head pose estimation problem that exploits the additional information multi-camera configurations provide. Our approach uses the probability estimates produced by multi-class support vector machines to calculate the probability distribution of the head pose. The distributions produced by the cameras are fused, resulting in a more precise estimate than any camera provides individually. We report experimental results confirming that the fused distribution provides higher accuracy than the individual classifiers, along with high robustness against errors.
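The fusion step can be summarized compactly: each camera's multi-class SVM emits a per-class probability vector, and the per-camera distributions are combined before taking the most probable class. The product rule below is one natural choice (a sketch; the paper's exact fusion rule may differ):

```python
import numpy as np

def fuse_head_pose(per_camera_probs):
    """Fuse per-camera head-pose class distributions into one estimate.

    per_camera_probs: (n_cameras, n_pose_classes) array, rows summing to 1.
    Returns the fused distribution and the most probable pose class index.
    """
    log_p = np.log(np.asarray(per_camera_probs) + 1e-12).sum(axis=0)
    fused = np.exp(log_p - log_p.max())   # subtract max for numerical stability
    fused /= fused.sum()                  # renormalize the product of experts
    return fused, int(np.argmax(fused))
```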

19.
We propose a novel framework called transient imaging for image formation and scene understanding through impulse illumination and time images. Using time-of-flight cameras and multi-path analysis of global light transport, we pioneer new algorithms and systems for scene understanding through time images. We demonstrate that our proposed transient imaging framework allows us to accomplish tasks that are well beyond the reach of existing imaging technology. For example, one can infer the geometry not only of the visible but also of the hidden parts of a scene, enabling us to look around corners. Traditional cameras estimate an intensity per pixel, I(x,y). Our transient imaging camera captures a 3D time-image I(x,y,t) for each pixel and uses an ultra-short pulse laser for illumination. Emerging technologies support cameras with a temporal profile per pixel at picosecond resolution, allowing us to capture an ultra-high-speed time-image; this time-image contains the time profile of irradiance incident at a sensor pixel. We experimentally corroborated our theory with free-space hardware experiments using a femtosecond laser and a picosecond-accurate sensing device. The ability to infer the structure of hidden scene elements, unobservable by both the camera and the illumination source, will create a range of new computer vision opportunities.
