Similar Literature
20 similar documents found.
1.
Hyperspectral cameras sample many different spectral bands at each pixel, enabling advanced detection and classification algorithms. However, their limited spatial resolution and the need to measure camera motion to create hyperspectral images make them unsuitable for nonsmooth moving platforms such as unmanned aerial vehicles (UAVs). We present a procedure to build hyperspectral images from line-sensor data without camera motion information or extraneous sensors. Our approach relies on an accompanying conventional camera, exploiting the homographies between its images for mosaic construction. We provide experimental results from a low-altitude UAV, achieving high-resolution spectroscopy with our system.
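A minimal sketch of the homography estimation such a mosaicking pipeline rests on, matching features between two frames of the conventional camera with OpenCV (the ORB/RANSAC choices and the function name are assumptions for illustration, not the authors' exact pipeline):

```python
# Estimate the homography mapping one conventional-camera frame into
# another; chaining such homographies lets line-sensor data be placed
# into a common mosaic without external motion sensors.
import cv2
import numpy as np

def frame_to_frame_homography(img_a, img_b):
    """Estimate the homography mapping img_b into img_a's frame."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```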

2.
Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night-vision tool for the military, but prices have recently dropped significantly, opening up a much broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of conventional greyscale and RGB cameras. This survey provides an overview of current applications of thermal cameras, covering animals, agriculture, buildings, gas detection, industrial, and military uses, as well as the detection, tracking, and recognition of humans. Moreover, the survey describes the nature of thermal radiation and the technology of thermal cameras.

3.
Yang Zhao, Zhao Yang, Hu Xiao, Yin Yi, Zhou Lihua, Tao Dapeng. Multimedia Tools and Applications, 2019, 78(9): 11983-12006

The surround view camera system is an emerging driving-assistance technology that helps drivers park by providing a top-down view of the surrounding situation. Such a system usually consists of four wide-angle or fish-eye cameras mounted around the vehicle, from whose images a bird's-eye view is synthesized. Surround view synthesis involves two fundamental problems: geometric alignment and image synthesis. Geometric alignment performs fish-eye calibration and computes the perspective transformation between the bird's-eye view and the images from the surrounding cameras. Image synthesis addresses seamless stitching between adjacent views and color balancing. In this paper, we propose a flexible central-around coordinate mapping (CACM) model for vehicle surround view synthesis. The CACM model calculates the perspective transformation between a top-view central camera coordinate frame and the around-camera coordinate frames using a marker-point based method. With the transformation matrices, we can generate the pixel-level mapping between the bird's-eye view and the images of the surrounding cameras. After geometric alignment, an image fusion method based on distance weighting is adopted for seamless stitching, and an effective overlapping-region brightness optimization method is proposed for color balancing. Both seamless stitching and color balancing can be operated easily using two types of weight coefficient under the framework of the CACM model. Experimental results show that the proposed approach provides a high-performance surround view camera system.

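A minimal sketch of distance-weighted fusion in the overlap between two adjacent, already-aligned views (the abstract does not give the exact weighting scheme; the names and the seam-distance inputs are assumptions):

```python
# Blend the overlap region of two adjacent bird's-eye views, weighting
# each pixel toward the camera whose seam is farther away, so the
# transition between views is seamless.
import numpy as np

def blend_overlap(view_a, view_b, seam_a, seam_b):
    """view_a, view_b: (H, W, 3) float arrays of the same overlap region.
    seam_a, seam_b: (H, W) pixel distances to each view's seam boundary."""
    w_a = seam_a / (seam_a + seam_b + 1e-8)
    w_b = 1.0 - w_a
    return w_a[..., None] * view_a + w_b[..., None] * view_b
```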

4.
In 3D reconstruction, the recovery of the calibration parameters of the cameras is paramount, since it provides metric information about the observed scene, e.g., measures of angles and ratios of distances. Autocalibration enables the estimation of the camera parameters without a calibration device, by enforcing simple constraints on the camera parameters. In the absence of information about internal camera parameters such as the focal length and the principal point, knowledge of the camera pixel shape is usually the only available constraint. Given a projective reconstruction of a rigid scene, we address the problem of autocalibrating a minimal set of cameras with known pixel shape and otherwise arbitrarily varying intrinsic and extrinsic parameters. We propose an algorithm that requires only five cameras (the theoretical minimum), thus halving the number of cameras required by previous algorithms based on the same constraint. For this purpose, we introduce as our basic geometric tool the six-line conic variety (SLCV), consisting of the set of planes intersecting six given lines of 3D space in points of a conic. We show that the set of solutions of the Euclidean upgrading problem for three cameras with known pixel shape can be parameterized in a computationally efficient way. This parameterization is then used to solve autocalibration from five or more cameras, reducing the three-dimensional search space to a two-dimensional one. We provide experiments with real images showing the good performance of the technique.

5.
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and latency on the order of microseconds. However, because the output is a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges, which naturally provide semi-dense geometric information without any pre-processing, and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real time on a CPU.
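A hedged illustration of the ray-voting idea behind such an approach: back-project each event's viewing ray, sample 3D hypotheses along it at candidate depths, and vote into a disparity space image (DSI) defined in a reference view; ridges of the vote volume indicate semi-dense structure. All names, the pose lookup, and the volume discretization below are assumptions, not the authors' code:

```python
# Accumulate ray votes from asynchronous events into a DSI.
import numpy as np

def emvs_vote(events, poses, K_inv, K_ref, T_ref_w, depth_planes, dsi):
    """events: iterable of (x, y, t) pixel events.
    poses: maps event time t to (R, tvec), camera-to-world pose
           (assumed pre-interpolated to event timestamps).
    K_inv: inverse intrinsics of the event camera; K_ref: reference intrinsics.
    T_ref_w: 4x4 world-to-reference transform.
    dsi: (D, H, W) vote volume, updated in place."""
    for x, y, t in events:
        R, tvec = poses[t]
        ray = R @ (K_inv @ np.array([x, y, 1.0]))        # world-frame ray
        for d_idx, depth in enumerate(depth_planes):
            Xw = tvec + depth * ray                      # 3D hypothesis
            Xr = T_ref_w[:3, :3] @ Xw + T_ref_w[:3, 3]   # reference frame
            if Xr[2] <= 0:
                continue
            u, v, _ = (K_ref @ Xr) / Xr[2]               # project to pixel
            if 0 <= int(v) < dsi.shape[1] and 0 <= int(u) < dsi.shape[2]:
                dsi[d_idx, int(v), int(u)] += 1
```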

6.
7.
Stereo-pair images obtained from two cameras can be used to compute three-dimensional (3D) world coordinates of a point using triangulation. However, to apply this method, camera calibration parameters for each camera need to be experimentally obtained. Camera calibration is a rigorous experimental procedure in which typically 12 parameters are to be evaluated for each camera. The general camera model is often such that the system becomes nonlinear and requires good initial estimates to converge to a solution. We propose that, for stereo vision applications in which real-world coordinates are to be evaluated, artificial neural networks be used to train the system such that the need for camera calibration is eliminated. The training set for our neural network consists of a variety of stereo-pair images and corresponding 3D world coordinates. We present the results obtained on our prototype mobile robot that employs two cameras as its sole sensors and navigates through simple regular obstacles in a high-contrast environment. We observe that the percentage errors obtained from our set-up are comparable with those obtained through standard camera calibration techniques and that the system is accurate enough for most machine-vision applications.
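A minimal sketch of the calibration-free mapping described above: a small regression network learns to map stereo pixel coordinates directly to 3D world coordinates in place of explicit triangulation (the library choice, network size, and file names are assumptions):

```python
# Learn a direct mapping [u_l, v_l, u_r, v_r] -> [x, y, z] from examples,
# eliminating the explicit camera-calibration step.
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.load("stereo_pixels.npy")   # (N, 4) pixel pairs; hypothetical file
Y = np.load("world_points.npy")    # (N, 3) surveyed world coordinates

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000)
net.fit(X, Y)

# Predict the 3D location of a new stereo correspondence.
xyz = net.predict([[312.0, 240.5, 287.4, 241.1]])
```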

8.
Automatic video segmentation plays an important role in real-time MPEG-4 encoding systems. Several video segmentation algorithms have been proposed; however, most are unsuitable for real-time applications because of their high computational load and the many parameters that must be set in advance. This paper presents a fast video segmentation algorithm for MPEG-4 camera systems. Using change detection and background registration techniques, the algorithm gives satisfying segmentation results with low computational load. A processing speed of 40 QCIF frames per second can be achieved on a personal computer with an 800 MHz Pentium-III processor. In addition, it has a shadow cancellation mode that can deal with lighting changes and shadow effects. A fast global motion compensation algorithm is also included, making the method applicable when the camera moves slightly. Furthermore, the required parameters are decided automatically, giving the algorithm adaptive thresholding capability. It can be integrated into MPEG-4 videophone systems and digital cameras.
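A minimal sketch of change detection combined with background registration, the core idea described above (thresholds, buffer length, and class names are illustrative assumptions; shadow cancellation and global motion compensation are omitted):

```python
# Pixels that stay static for several frames are registered into a
# background model; foreground is whatever differs from that model.
import numpy as np

class BackgroundSegmenter:
    def __init__(self, shape, still_frames=10, thresh=15.0):
        self.prev = None
        self.still_count = np.zeros(shape, dtype=np.int32)
        self.background = np.zeros(shape, dtype=np.float32)
        self.bg_valid = np.zeros(shape, dtype=bool)
        self.still_frames = still_frames
        self.thresh = thresh

    def segment(self, frame):
        """frame: (H, W) greyscale array; returns a boolean foreground mask."""
        frame = frame.astype(np.float32)
        if self.prev is None:
            self.prev = frame
            return np.zeros(frame.shape, dtype=bool)
        changed = np.abs(frame - self.prev) > self.thresh
        # Register pixels that have been static long enough as background.
        self.still_count = np.where(changed, 0, self.still_count + 1)
        register = self.still_count >= self.still_frames
        self.background[register] = frame[register]
        self.bg_valid |= register
        # Foreground: differs from registered background (else: changing).
        fg = np.where(self.bg_valid,
                      np.abs(frame - self.background) > self.thresh,
                      changed)
        self.prev = frame
        return fg
```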

9.
Liu Dongdong. 微型电脑应用 (Microcomputer Applications), 2012, 28(3): 43-45, 68-69
We design a multi-camera surveillance network based on panoramic vision. The panoramic camera has a wide field of view and enables target detection and tracking over a large area. The pan-tilt camera's viewpoint has several degrees of freedom and can capture high-resolution images of a target. By coordinating the panoramic camera with the pan-tilt camera, and through multi-sensor data fusion, a hierarchical tracking algorithm, and a multi-camera scheduling algorithm, the system detects and tracks multiple moving targets over a wide area while capturing clear images of each target. Experiments verify the effectiveness and soundness of the system.

10.
The problem of estimating and predicting position and orientation (pose) of a camera is approached by fusing measurements from inertial sensors (accelerometers and rate gyroscopes) and vision. The sensor fusion approach described in this contribution is based on non-linear filtering of these complementary sensors. This way, accurate and robust pose estimates are available for the primary purpose of augmented reality applications, but with the secondary effect of reducing computation time and improving the performance in vision processing. A real-time implementation of a multi-rate extended Kalman filter is described, using a dynamic model with 22 states, where 12.5 Hz correspondences from vision and 100 Hz inertial measurements are processed. An example where an industrial robot is used to move the sensor unit is presented. The advantage with this configuration is that it provides ground truth for the pose, allowing for objective performance evaluation. The results show that we obtain an absolute accuracy of 2 cm in position and 1° in orientation.
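A minimal multi-rate EKF skeleton consistent with the setup described (predict at 100 Hz from inertial data, correct at 12.5 Hz from vision). The models f, h and Jacobians F, H are placeholders, not the paper's 22-state model:

```python
# Generic EKF predict/update steps; the multi-rate structure comes from
# running eight predictions per vision update (100 Hz / 12.5 Hz = 8).
import numpy as np

def ekf_predict(x, P, f, F, Q, u, dt):
    x = f(x, u, dt)                 # propagate state with IMU input u
    Fx = F(x, u, dt)
    P = Fx @ P @ Fx.T + Q
    return x, P

def ekf_update(x, P, h, H, R, z):
    Hx = H(x)
    S = Hx @ P @ Hx.T + R
    K = P @ Hx.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))          # correct with vision measurement z
    P = (np.eye(len(x)) - K @ Hx) @ P
    return x, P

# Sketch of the multi-rate loop (streams are assumed):
# for k, imu in enumerate(imu_stream):
#     x, P = ekf_predict(x, P, f, F, Q, imu, dt=0.01)
#     if k % 8 == 0:
#         x, P = ekf_update(x, P, h, H, R, next(vision_stream))
```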

11.
For a binocular head to perform optimal 3D tracking, it should be able to verge its cameras actively while maintaining geometric calibration. In this work we introduce a calibration update procedure that allows a robotic head to simultaneously fixate, track, and reconstruct a moving object in real time. The update method is based on a mapping from motor-based to image-based estimates of the camera orientations, estimated in an offline stage. Following this, a fast online procedure is presented to update the calibration of an active binocular camera pair. The proposed approach is ideal for active vision applications because no image processing is needed at runtime to calibrate the system or to maintain the calibration parameters during camera vergence. We show that this homography-based technique allows an active binocular robot to fixate and track an object while performing 3D reconstruction concurrently in real time.

12.
Wireless multimedia sensor networks (WMSNs) are interconnected devices that allow retrieving video and audio streams, still images, and scalar data from the environment. In a densely deployed WMSN, there exists correlation among the visual information observed by cameras with overlapping fields of view. This paper proposes a novel spatial correlation model for visual information in WMSNs. By studying the sensing model and deployments of cameras, a spatial correlation function is derived to describe the correlation characteristics of visual information observed by cameras with overlapping fields of view. The joint effect of multiple correlated cameras is also studied. An entropy-based analytical framework is developed to measure the amount of visual information provided by multiple cameras in the network. Furthermore, according to the proposed correlation function and entropy-based framework, a correlation-based camera selection algorithm is designed. Experimental results show that the proposed spatial correlation function can model the correlation characteristics of visual information in WMSNs with low computation and communication costs. Further simulations show that, given a distortion bound at the sink, the correlation-based camera selection algorithm requires fewer cameras to report to the sink than a random selection algorithm.
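As a rough sketch of correlation-based selection, the greedy loop below picks cameras whose views are least correlated with those already chosen; the correlation function rho and the gain approximation are assumptions standing in for the paper's entropy-based framework:

```python
# Greedily select k cameras, approximating each candidate's information
# gain as the fraction of its view not covered by its most-correlated
# already-selected camera.
def select_cameras(n_cams, k, rho):
    """rho(i, j) in [0, 1]: correlation between views of cameras i and j."""
    selected = []
    remaining = set(range(n_cams))
    while len(selected) < k and remaining:
        gain = {c: 1.0 - max((rho(c, s) for s in selected), default=0.0)
                for c in remaining}
        best = max(gain, key=gain.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```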

13.
Distributed embedded smart cameras for surveillance applications
Recent advances in computing, communication, and sensor technology are pushing the development of many new applications. This trend is especially evident in pervasive computing, sensor networks, and embedded systems. Smart cameras, one example of this innovation, are equipped with a high-performance onboard computing and communication infrastructure, combining video sensing, processing, and communications in a single embedded device. By providing access to many views through cooperation among individual cameras, networks of embedded cameras can potentially support more complex and challenging applications - including smart rooms, surveillance, tracking, and motion analysis - than a single camera. We designed our smart camera as a fully embedded system, focusing on power consumption, QoS management, and limited resources. The camera is a scalable, embedded, high-performance, multiprocessor platform consisting of a network processor and a variable number of digital signal processors (DSPs). Using the implemented software framework, our embedded cameras offer system-level services such as dynamic load distribution and task reconfiguration. In addition, we combined several smart cameras to form a distributed embedded surveillance system that supports cooperation and communication among cameras.

14.
The most frequent application of unmanned aerial vehicles (UAVs) is to collect optical colour images of an area of interest. Thus, high-spatial-resolution colour images with a high signal-to-noise ratio (SNR) are of great importance in UAV applications. Currently, most UAVs use single-sensor colour filter array (CFA) cameras for image collection, among which Bayer-pattern sensors are the most frequently used. Due to the limitations of CFAs, the quality (in terms of spatial resolution, SNR, and sharpness) of UAV colour images is not optimal. In this article, a sensor fusion solution is proposed to improve the quality of UAV imaging. In the proposed solution, a high-resolution colour (HRC) Bayer-pattern sensor is replaced by a dual-camera set containing a panchromatic (Pan) sensor with the same pixel size and a Bayer-pattern colour (or four-band multi-spectral) sensor with a larger pixel size; the resulting images of the dual-camera set are then fused. The enlarged pixel size of the colour sensor provides a higher SNR at the cost of lower spatial resolution, while the accompanying Pan sensor provides single-band images with high SNR and high spatial resolution. Fusing the images of the dual-camera set generates colour (or MS) images with high spatial resolution, SNR, and sharpness, compensating for the major problems of Bayer-pattern filters.

This replacement solution was initially tested in a laboratory experiment. The quality assessments show that the SNR is increased by 2–3 times, the sharpness is improved by around 2 times, and the spatial resolution is increased up to the level of the Pan images, while the colour errors remain almost as low as in the original colour images. In addition, the image classification capability of the images is examined using two methods, Support Vector Machine (SVM) and Maximum Likelihood (ML); the classification results also confirm an increase in accuracy of around 20–40%. Therefore, the proposed sensor fusion can be a good alternative for UAV colour sensors.
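As a rough illustration of the dual-camera fusion, the sketch below applies a simple Brovey-style intensity substitution; this is a stand-in choice, since the article does not state its fusion algorithm, and the function and variable names are assumptions:

```python
# Fuse a high-resolution panchromatic image with an upsampled
# lower-resolution colour image: rescale each band so the fused
# intensity matches the Pan image while preserving colour ratios.
import numpy as np
import cv2

def pansharpen(pan, rgb_low):
    """pan: (H, W) float32 array; rgb_low: (h, w, 3) float32, h < H, w < W."""
    rgb = cv2.resize(rgb_low, (pan.shape[1], pan.shape[0]),
                     interpolation=cv2.INTER_CUBIC)
    intensity = rgb.mean(axis=2) + 1e-8
    return rgb * (pan / intensity)[..., None]
```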


15.
Fish-eye lenses are convenient in such applications where a very wide angle of view is needed, but their use for measurement purposes has been limited by the lack of an accurate, generic, and easy-to-use calibration procedure. We hence propose a generic camera model, which is suitable for fish-eye lens cameras as well as for conventional and wide-angle lens cameras, and a calibration method for estimating the parameters of the model. The achieved level of calibration accuracy is comparable to the previously reported state-of-the-art.
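A hedged sketch of the kind of generic radially symmetric model such work proposes: the image radius is an odd-order polynomial of the incidence angle, which subsumes pinhole, wide-angle, and fish-eye projections under one parameterization (coefficient values and names here are illustrative, not calibrated results):

```python
# Project 3D camera-frame points with a radial model
# r(theta) = k1*theta + k2*theta**3 + k3*theta**5 + ...
import numpy as np

def project(X, k, f=1.0, cx=0.0, cy=0.0):
    """X: (N, 3) points in camera coordinates; k: (k1, k2, ...)."""
    theta = np.arctan2(np.linalg.norm(X[:, :2], axis=1), X[:, 2])
    r = sum(ki * theta ** (2 * i + 1) for i, ki in enumerate(k))
    phi = np.arctan2(X[:, 1], X[:, 0])
    return np.stack([cx + f * r * np.cos(phi),
                     cy + f * r * np.sin(phi)], axis=1)
```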

16.
One of the leading time-of-flight imaging technologies for depth sensing is based on Photonic Mixer Devices (PMD). In PMD sensors, each pixel samples the correlation between the emitted and received light signals. Current PMD cameras compute eight correlation samples per pixel in four sequential stages to obtain depth invariant to signal amplitude and offset variations. Under motion, PMD pixels capture different depths at each stage; as a result, the correlation samples are not coherent with a single depth, producing artifacts. We propose to detect and remove motion artifacts from a single frame taken by a PMD camera. The algorithm is very fast and simple and can easily be included in camera hardware. We recover the depth of each pixel by exploiting the consistency of its correlation samples and those of its local neighbors. In addition, our method obtains the motion flow of occluding contours in the image from a single frame. The system has been validated in real scenes with high-speed dynamics using a commercial low-cost PMD camera. In all cases our method produces accurate results and greatly reduces motion artifacts.
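For context, the standard four-phase depth recovery that such artifact analysis builds on can be written compactly with the common arctangent estimator (the modulation frequency is an assumed value, and the paper's detection step itself is not reproduced):

```python
# Recover depth from four correlation samples A0..A3 taken at
# 0/90/180/270 degree phase offsets; motion makes these four samples
# mutually inconsistent, which is the artifact source discussed above.
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 20e6               # assumed modulation frequency, Hz

def pmd_depth(a0, a1, a2, a3):
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)  # offset-invariant
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)             # amplitude-invariant
    depth = C * phase / (4 * np.pi * F_MOD)                  # phase -> distance
    return depth, amplitude
```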

17.
In video post-production applications, camera motion analysis and alignment are important for ensuring geometric correctness and temporal consistency. In this paper, we trade some generality in estimating and aligning camera motion for reduced computational complexity and a more image-based formulation. The main contribution is the use of fundamental ratios to synchronize video sequences of distinct scenes captured by cameras undergoing similar motions. We also present a simple method to align 3D camera trajectories when the fundamental ratios cannot match the noisy trajectories. Experimental results show that our method can accurately synchronize sequences even when the scenes are totally different and have dense depth variation. An application to 3D object transfer is also demonstrated.

18.
Multimedia presentations have become an indispensable feature of museum exhibits in recent years. Advances in technology have increased the relevance of studying digital communication using computational devices. Devices such as multi-touch screens and cameras are essential for natural communication, and entertainment applications are an obvious way to attract users. This study focuses on the use of cameras to support the natural interaction of visitors during museum presentations. We first outline a platform called the “U-Garden,” comprising a set of tools to assist application designers in developing movement-based projects that employ camera tracking. We then establish a rationale on which to base the design of such presentation tools. The system supports natural interaction based on depth image streams and provides tracking results that designers can use to produce a wide range of engaging applications.

19.
The complexity of multi-sensor information fusion in intelligent systems urgently calls for a suitable architecture. Most current architectures process the information of sensors distributed at different locations through a central fusion node, with no necessary links between the underlying sensors. This overloads the fusion center with computation and communication, creating a bottleneck, and prevents the sensors from informing one another to improve the efficiency of perceiving the task environment. To address these problems, this paper first proposes a new concept of the smart sensor and identifies the five basic capabilities it must possess: prediction, planning, refreshing, communication, and assimilation. On this basis, we discuss the algorithms and information flow of a system composed of multiple smart sensors. Finally, the concrete application of this new idea is analyzed through an example in which active vision and active touch jointly perceive the pose of a moving object.

20.
In this paper, we discuss the problem of estimating the parameters of a calibration model for active pan–tilt–zoom cameras. The variation of the intrinsic parameters of each camera over its full range of zoom settings is estimated through a two-step procedure. We first determine the intrinsic parameters at the camera's lowest zoom setting very accurately by capturing an extended panorama. The camera intrinsics and radial distortion parameters are then determined at discrete steps in a monotonically increasing zoom sequence that spans the full zoom range of the camera. Our model incorporates the variation of radial distortion with camera zoom. Both calibration phases are fully automatic and do not assume any knowledge of the scene structure. High-resolution calibrated panoramic mosaics are also computed during this process. These fully calibrated panoramas are represented as multi-resolution pyramids of cube-maps. We describe a hierarchical approach for building multiple levels of detail in panoramas by aligning hundreds of images captured within a 1–12× zoom range. Results are shown on datasets captured from two types of pan–tilt–zoom camera placed in an uncontrolled outdoor environment. The estimated camera intrinsics model, together with the cube-maps, provides a calibration reference for images captured on the fly by the active pan–tilt–zoom camera during operation, making our approach promising for active camera network calibration.
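A minimal sketch of how a discretely calibrated zoom sequence can be used at runtime: interpolate the stored intrinsics to the camera's current zoom setting (the table values below are illustrative assumptions, not the paper's calibration data):

```python
# Interpolate focal length and radial distortion, calibrated at discrete
# zoom steps, to an arbitrary zoom setting queried during operation.
import numpy as np

zoom_steps = np.array([1, 2, 4, 6, 8, 10, 12])          # calibrated settings
focal_px   = np.array([800, 1580, 3150, 4700, 6280, 7850, 9400])
radial_k1  = np.array([-0.31, -0.18, -0.09, -0.05, -0.03, -0.02, -0.01])

def intrinsics_at(zoom):
    """Return (focal length in pixels, k1) at a given zoom setting."""
    f  = np.interp(zoom, zoom_steps, focal_px)
    k1 = np.interp(zoom, zoom_steps, radial_k1)
    return f, k1

f, k1 = intrinsics_at(5.0)   # query an uncalibrated intermediate setting
```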
