Similar Documents
20 similar documents found (search time: 31 ms)
1.
We present an approach that significantly enhances the capabilities of traditional image mosaicking. The key observation is that as a camera moves, it senses each scene point multiple times. We rigidly attach to the camera an optical filter with spatially varying properties, so that multiple measurements are obtained for each scene point under different optical settings. Fusing the data captured in the multiple images yields an image mosaic that includes additional information about the scene. We refer to this approach as generalized mosaicing. In this paper we show that this approach can significantly extend the optical dynamic range of any given imaging system by exploiting vignetting effects. We derive the optimal vignetting configuration and implement it using an external filter with spatially varying transmittance. We also derive efficient scene sampling conditions as well as ways to self calibrate the vignetting effects. Maximum likelihood is used for image registration and fusion. In an experiment we mounted such a filter on a standard 8-bit video camera, to obtain an image panorama with dynamic range comparable to imaging with a 16-bit camera.
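To make the fusion idea concrete, here is a minimal NumPy sketch, not the paper's estimator, of combining several measurements of one scene point taken through different known attenuations into a single extended-range value; the function name, the 8-bit saturation threshold, and the inverse-variance weighting are illustrative assumptions.

```python
import numpy as np

def fuse_attenuated_samples(measurements, attenuations, saturation=255.0):
    """Fuse multiple measurements of one scene point, each taken through a
    different known attenuation (e.g. a spatially varying filter), into one
    radiance estimate.  Hypothetical sketch, not the paper's estimator.

    measurements : raw pixel values (possibly saturated)
    attenuations : filter transmittances in (0, 1], same length
    """
    m = np.asarray(measurements, dtype=float)
    t = np.asarray(attenuations, dtype=float)

    # Saturated samples only lower-bound the radiance, so drop them.
    valid = m < saturation
    if not np.any(valid):
        return saturation / t.min()          # crude lower bound

    # Each valid sample gives an estimate m_i / t_i of the radiance.  With
    # identical additive sensor noise on m_i, the variance of m_i / t_i is
    # proportional to 1 / t_i**2, so the weight is t_i**2.
    est = m[valid] / t[valid]
    w = t[valid] ** 2
    return float(np.sum(w * est) / np.sum(w))

# Example: one 8-bit pixel seen through 100%, 30% and 5% transmittance.
print(fuse_attenuated_samples([255, 200, 34], [1.0, 0.3, 0.05]))
```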

2.
We present an approach to image the polarization state of object points in a wide field of view, while enhancing the radiometric dynamic range of imaging systems by generalizing image mosaicing. The approach is biologically inspired, as it emulates spatially varying polarization sensitivity of some animals. In our method, a spatially varying polarization and attenuation filter is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time filtering it through a different filter polarizing angle, polarizance, and transmittance. Polarization is an additional dimension of the generalized mosaicing paradigm, which has recently yielded high dynamic range images and multispectral images in a wide field of view using other kinds of filters. The image acquisition is as easy as in traditional image mosaics. The computational algorithm can easily handle nonideal polarization filters (partial polarizers), variable exposures, and saturation in a single framework. The resulting mosaic represents the polarization state at each scene point. Using data acquired by this method, we demonstrate attenuation and enhancement of specular reflections and semi-reflection separation in an image mosaic.

3.
Over the past few years a large amount of research has been carried out on multispectral image acquisition, most of it using filter-wheel-based acquisition systems. Recently, a new multispectral acquisition method was proposed that extends an array of cameras to form multiple channels, introducing the concept of the multispectral array camera. This paper builds a multispectral image acquisition system based on such an array camera. The array uses 12 Daheng DH-HV1300FM cameras, 11 of whose lenses are fitted with filters of different wavelengths. Exploiting the large number of channels of the array camera, we propose a spectral reflectance reconstruction algorithm that reliably and effectively reconstructs the spectral reflectance of the scene. Simulation results verify the effectiveness of the system.
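For illustration, the sketch below shows a standard linear, Tikhonov-regularized reflectance reconstruction of the kind such multi-channel systems enable; it is not the algorithm proposed in the paper, and the wavelength sampling, Gaussian channel sensitivities, noise level, and regularization weight are made-up stand-ins.

```python
import numpy as np

# Hedged sketch of linear spectral-reflectance reconstruction from a
# multi-channel (filter-array) camera; not the paper's algorithm.
rng = np.random.default_rng(0)
wl = np.arange(400, 701, 10)                # 31 wavelength samples, 400-700 nm

# Synthetic system matrix M (channels x wavelengths): each channel's effective
# spectral sensitivity (illuminant x filter x sensor), modelled as a Gaussian.
centers = np.linspace(410, 690, 12)
M = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 25.0) ** 2)

# Ground-truth smooth reflectance and simulated camera responses with noise.
r_true = 0.5 + 0.4 * np.sin(wl / 60.0)
c = M @ r_true + 0.01 * rng.standard_normal(M.shape[0])

# Tikhonov-regularised least squares: r = argmin ||M r - c||^2 + lam ||r||^2
lam = 1e-2
r_est = np.linalg.solve(M.T @ M + lam * np.eye(len(wl)), M.T @ c)

print("RMS reconstruction error:", np.sqrt(np.mean((r_est - r_true) ** 2)))
```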

4.
Image and Vision Computing, 2002, 20(9-10): 751-759
We describe the construction of accurate panoramic mosaics from multiple images taken with a rotating camera, or alternatively of a planar scene. The novelty of the approach lies in (i) the transfer of photogrammetric bundle adjustment techniques to mosaicing; (ii) a new representation of image line measurements enabling the use of lines in camera self-calibration, including computation of the radial and other non-linear distortion; and (iii) the application of the variable state dimension filter to obtain efficient sequential updates of the mosaic as each image is added. We demonstrate that our method achieves better results than the alternative approach of optimising over pairs of images.

5.
We present a novel technique for capturing spatially or temporally resolved light probe sequences, and using them for image based lighting. For this purpose we have designed and built a real-time light probe, a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates, while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10,000,000:1 at 25 frames per second. By tracking the position and orientation of the light probe, it is possible to transform each light probe into a common frame of reference in world coordinates, and map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real world lighting, first by using traditional image based lighting methods with temporally varying light probe illumination, and second by an extension that handles spatially varying lighting conditions across large objects and object motion along an extended path.

6.
Split Aperture Imaging for High Dynamic Range
Most imaging sensors have limited dynamic range and hence are sensitive to only a part of the illumination range present in a natural scene. The dynamic range can be improved by acquiring multiple images of the same scene under different exposure settings and then combining them. In this paper, we describe a camera design for simultaneously acquiring multiple images. The cross-section of the incoming beam from a scene point is partitioned into as many parts as the required number of images. This is done by splitting the aperture into multiple parts and directing the beam exiting from each in a different direction using an assembly of mirrors. A sensor is placed in the path of each beam and exposure of each sensor is controlled either by appropriately setting its exposure parameter, or by splitting the incoming beam unevenly. The resulting multiple exposure images are used to construct a high dynamic range image. We have implemented a video-rate camera based on this design and the results obtained are presented.
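As a rough illustration of the merging step, the following sketch combines simultaneously captured, differently exposed linear images into one radiance map using a conventional weighted average; the hat weighting, the function name, and the synthetic scene are assumptions, not the paper's specific procedure.

```python
import numpy as np

def merge_exposures(images, exposures):
    """Merge simultaneously captured, differently exposed linear images into
    one high-dynamic-range radiance map.  Illustrative sketch only; the
    paper's split-aperture camera may use a different merging rule.

    images    : list of (H, W) arrays with values in [0, 1] (linear response)
    exposures : list of relative exposures / beam-split fractions
    """
    acc = np.zeros_like(np.asarray(images[0], dtype=float))
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposures):
        img = np.asarray(img, dtype=float)
        # Hat weighting: trust mid-range pixels, down-weight pixels that are
        # nearly black (noisy) or nearly saturated.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

# Example: two synthetic captures of the same scene, one of them clipped.
scene = np.linspace(0.0, 3.5, 8).reshape(2, 4)             # true radiance
captures = [np.clip(scene * t, 0.0, 1.0) for t in (1.0, 0.25)]
print(merge_exposures(captures, [1.0, 0.25]))               # ~ scene
```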

7.
We present a sequential mosaicing algorithm for a calibrated rotating camera which can for the first time build drift-free, consistent spherical mosaics in real-time, automatically and seamlessly, even when previously viewed parts of the scene are re-visited. Our mosaic is composed of elastic triangular tiles attached to a backbone map of feature directions over the unit sphere, built using a sequential EKF SLAM (Extended Kalman Filter Simultaneous Localization And Mapping) approach. This method represents a significant advance on previous mosaicing techniques which either require off-line optimization or which work in real-time but use local alignment of nearby images and ultimately drift. We demonstrate the system's real-time performance with mosaicing results from sequences with 360-degree pans. The system shows good global mosaicing ability despite the challenging conditions: a hand-held, simple, low-resolution webcam, varying natural outdoor illumination, and people moving in the scene.

8.
We present a simple and effective technique for absolute colorimetric camera characterization, invariant to changes in exposure/aperture and scene irradiance, suitable for a wide range of applications including image-based reflectance measurements, spectral pre-filtering and spectral upsampling for rendering, and improving colour accuracy in high dynamic range imaging. Our method requires a limited number of acquisitions, an off-the-shelf target, and a commonly available projector used as a controllable light source, in addition to the reflected radiance to be known. The characterized camera can be effectively used as a 2D tele-colorimeter, providing the user with an accurate estimate of the distribution of luminance and chromaticity in a scene, without requiring explicit knowledge of the incident lighting power spectra. We validate the approach by comparing our estimated absolute tristimulus values (XYZ data) with the measurements of a professional 2D tele-colorimeter, for a set of scenes with complex geometry, spatially varying reflectance, and light sources with very different spectral power distributions.

9.
Vision in scattering media is important but challenging. Images suffer from poor visibility due to backscattering and attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome, while natural illumination is inapplicable to dark environments. The current paper addresses the need for a non-scanning recovery method that uses active scene irradiance. We study the formation of images under wide-field artificial illumination. Based on the formation model, the paper presents an approach for recovering the object signal. It also yields rough information about the 3D scene structure. The approach can work with compact, simple hardware, having active wide-field, polychromatic polarized illumination. The camera is fitted with a polarization analyzer. Two frames of the scene are taken, with different states of the analyzer or polarizer. A recovery algorithm follows the acquisition. It allows both the backscatter and the object reflection to be partially polarized. It thus unifies and generalizes prior polarization-based methods, which had assumed exclusive polarization of either of these components. The approach is limited to an effective range, due to image noise and illumination falloff. Thus, the limits and noise sensitivity are analyzed. We demonstrate the approach in underwater field experiments.
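For context, the sketch below implements the simpler, earlier style of polarization recovery that the abstract says it generalizes: the backscatter is assumed partially polarized with a known degree of polarization while the object signal is unpolarized. The function name, the noise-free toy pixel values, and the clipping are illustrative assumptions.

```python
import numpy as np

def recover_signal(i_best, i_worst, p_backscatter):
    """Recover the object signal from two frames taken through different
    analyzer states, assuming (as in the earlier methods this paper
    generalizes) that only the backscatter is partially polarized, with a
    known degree of polarization p_backscatter.  Illustrative sketch.

    i_best  : frame with the least backscatter (best analyzer state)
    i_worst : frame with the most backscatter (worst analyzer state)
    """
    i_best = np.asarray(i_best, dtype=float)
    i_worst = np.asarray(i_worst, dtype=float)
    total = i_best + i_worst                      # full, unfiltered intensity
    backscatter = (i_worst - i_best) / max(p_backscatter, 1e-6)
    signal = total - backscatter
    return np.clip(signal, 0.0, None), np.clip(backscatter, 0.0, None)

# Toy pixel: object signal 0.3, backscatter 0.5 with 60% polarization.
S, B, p = 0.3, 0.5, 0.6
i_worst = S / 2 + B * (1 + p) / 2     # analyzer aligned with the backscatter
i_best = S / 2 + B * (1 - p) / 2      # analyzer crossed with the backscatter
print(recover_signal(i_best, i_worst, p))         # ~ (0.3, 0.5)
```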

10.
We present two practical methods for measurement of spectral skin reflectance suited for live subjects, and drive a spectral BSSRDF model of appropriate complexity to match skin appearance in photographs, including human faces. Our primary measurement method illuminates a subject with two complementary uniform spectral illumination conditions using a multispectral LED sphere, to estimate spatially varying chromophore parameters including melanin and hemoglobin concentration, melanin blend-type fraction, and epidermal hemoglobin fraction. We demonstrate that our proposed complementary measurements enable a higher-quality estimate of chromophores than those obtained using standard broadband illumination, while being suitable for integration with multiview facial capture using regular color cameras. Besides novel optimal measurements under controlled illumination, we also demonstrate how to adapt practical skin patch measurements using a hand-held dermatological skin measurement device, a Miravex Antera 3D camera, for skin appearance reconstruction and rendering. Furthermore, we introduce a novel approach for parameter estimation from the measurements using neural networks, which is significantly faster than a lookup table search and avoids parameter quantization. We demonstrate high quality matches of skin appearance with photographs for a variety of skin types with our proposed practical measurement procedures, including photorealistic spectral reproduction and renderings of facial appearance.

11.
We address the problem of jointly estimating the scene illumination, the radiometric camera calibration, and the reflectance properties of an object using a set of images from a community photo collection. The highly ill-posed nature of this problem is circumvented by using appropriate representations of illumination, an empirical model for the nonlinear function that relates image irradiance with intensity values, and additional assumptions on the surface reflectance properties. Using a 3D model recovered from an unstructured set of images, we estimate the coefficients that represent the illumination for each image using a frequency framework. For each image, we also compute the corresponding camera response function. Additionally, we calculate a simple model for the reflectance properties of the 3D model. A robust non-linear optimization is proposed that exploits the high sparsity present in the problem.

12.
To address the shortcoming that a traditional three-channel RGB camera cannot fully recover the spectral reflectance of an object surface even when the spectrum of the light source is known, this paper constructs a multispectral imaging array camera system. The array uses 12 Daheng DH-HV1300FM cameras, with 11 lenses fitted with filters of different wavelengths. Exploiting the large number of channels of the array camera, we propose an MSIS-GOC (Multi-spectral Imaging System based on Group of Camera) algorithm that reliably and effectively reconstructs the spectral reflectance of the scene. Analysis of simulation results verifies the effectiveness of the system.

13.
This work proposes a camera self-calibration method for cameras with varying intrinsic parameters, using a sequence of images of an unknown 3D object. The projections of two points of the 3D scene in the image planes are used together with fundamental matrices to determine the projection matrices. The approach is based on formulating a nonlinear cost function from a relationship between two points of the scene and their projections in the image planes. Minimizing this function allows us to estimate the intrinsic parameters of the different cameras. The strength of the approach lies in relaxing the three usual constraints of a self-calibration setup (a pair of images, a 3D scene, any camera): using a single pair of images yields fewer equations, which reduces execution time; using a general 3D scene removes planarity constraints; and allowing any camera removes the assumption of constant intrinsic parameters. Experimental results on synthetic and real data demonstrate the performance of the approach in terms of accuracy, simplicity, stability, and convergence.
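As a small worked stand-in for "minimize a nonlinear cost built from a fundamental matrix to recover intrinsics", the sketch below uses the classical equal-singular-value (essential-matrix) cost on a synthetic fundamental matrix to recover a single constant focal length; it is not the paper's cost function, and the known principal point, unit aspect ratio, synthetic motion, and grid search are simplifying assumptions.

```python
import numpy as np

def K_of(f):
    """Calibration matrix with focal length f, unit aspect ratio and the
    principal point at the origin (simplifying assumptions)."""
    return np.diag([f, f, 1.0])

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Synthetic ground truth: focal length 800 px and some relative motion.
f_true = 800.0
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
t = np.array([1.0, 0.2, 0.1])
E_true = skew(t) @ R                                   # essential matrix
K_inv = np.linalg.inv(K_of(f_true))
F = K_inv.T @ E_true @ K_inv                           # fundamental matrix

def cost(f):
    """Nonlinear self-calibration cost: a valid essential matrix K^T F K
    must have two equal non-zero singular values."""
    E = K_of(f).T @ F @ K_of(f)
    s1, s2, _ = np.linalg.svd(E, compute_uv=False)
    return (s1 - s2) / s2

# Minimize the cost over a grid of candidate focal lengths.
candidates = np.linspace(100.0, 3000.0, 2901)
f_est = candidates[np.argmin([cost(f) for f in candidates])]
print("estimated focal length:", f_est)                # ~ 800 here
```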

14.
Estimation of scene illumination from a single image or an image sequence has been widely studied in computer vision. The approach presented in this paper introduces two new aspects: (1) illumination classification is performed rather than illumination estimation; (2) an object-based approach is used for illumination evaluation. Thus, pixels associated with an object are considered in the illumination estimation process using the object's spectral characteristics. Simulation and real image experiments show that the object-based approach indeed improves performance over standard illumination classification.

15.
This paper introduces a novel camera attachment for measuring the illumination color spatially in the scene. The illumination color is then used to transform color appearance in the image into that under white light. The main idea is that the scene inter-reflection through a reference camera-attached surface, the "nose", can, under some conditions, represent the illumination color directly. The illumination measurement principle relies on the satisfaction of the gray world assumption in a local scene area or on the appearance of highlights from dielectric surfaces. Scene inter-reflections are strongly blurred due to optical dispersion on the nose surface and defocusing of the nose surface image. Blurring smoothes the intense highlights, and it thus becomes possible to measure the nose inter-reflection under conditions in which intensity variation in the main image would exceed the sensor dynamic range. We designed a nose surface to reflect a blurred scene version into a small image section, which is interpreted as a spatial illumination image. The nose image is then mapped to the main image for adjusting every pixel color. Experimental results showed that the nose inter-reflection color is a good measure of illumination color when the model assumptions are satisfied. The performance of the nose method, operating on real images, is presented and compared with the Retinex and the scene-inserted white patch methods.
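Once an illumination colour has been measured for a pixel or region, the adjustment itself is typically a diagonal (von Kries-style) per-channel scaling; the sketch below shows only that final correction step, with the function name, normalisation choice, and the toy grey patch being assumptions rather than the paper's procedure.

```python
import numpy as np

def correct_to_white(image, illum_rgb):
    """Diagonal (von Kries-style) correction: divide each channel by the
    measured illumination colour so the scene appears as if lit by white
    light.  Illustrative sketch of the final adjustment step; the paper first
    maps a spatially varying illumination image onto the main image.

    image     : (H, W, 3) array, linear RGB
    illum_rgb : (3,) or (H, W, 3) measured illumination colour(s)
    """
    illum = np.asarray(illum_rgb, dtype=float)
    illum = illum / illum.max(axis=-1, keepdims=True)   # keep overall brightness
    return np.clip(np.asarray(image, dtype=float) / np.maximum(illum, 1e-6), 0, 1)

# Example: a grey patch under a reddish illuminant becomes roughly neutral.
img = np.full((2, 2, 3), [0.6, 0.45, 0.4])
print(correct_to_white(img, [1.0, 0.75, 0.66]))
```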

16.
The majority of visual simultaneous localization and mapping (SLAM) approaches consider feature correspondences as an input to the joint process of estimating the camera pose and the scene structure. In this paper, we propose a new approach for simultaneously obtaining the correspondences, the camera pose, the scene structure, and the illumination changes, all directly using image intensities as observations. Exploitation of all possible image information leads to more accurate estimates and avoids the inherent difficulties of reliably associating features. We also show here that, in this case, structural constraints can be enforced within the procedure as well (instead of a posteriori), namely the cheirality, the rigidity, and those related to the lighting variations. We formulate the visual SLAM problem as a nonlinear image alignment task. The proposed parameters to perform this task are optimally computed by an efficient second-order approximation method for fast processing and avoidance of irrelevant minima. Furthermore, a new solution to the visual SLAM initialization problem is described whereby no assumptions are made about either the scene or the camera motion. Experimental results are provided for a variety of scenes, including urban and outdoor ones, under general camera motion and different types of perturbations.
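To illustrate the "nonlinear image alignment with an efficient second-order approximation" idea in miniature, the sketch below aligns a 1D signal over a single shift parameter using Gauss-Newton steps with an averaged (ESM-style) gradient; the signal, the single-parameter warp, and the iteration count are toy assumptions, far simpler than the full pose-and-structure estimation described in the paper.

```python
import numpy as np

# Hedged 1D sketch of direct (intensity-based) alignment with an efficient
# second-order style step: the Jacobian is the mean of the reference and
# warped gradients.  The real system does this over camera poses on images.

x = np.linspace(0.0, 2.0 * np.pi, 400)
template = np.sin(3 * x) + 0.5 * np.sin(7 * x)          # reference "image"
true_shift = 0.21
current = np.interp(x - true_shift, x, template)        # observed, shifted

def warp(signal, d):
    """Shift the signal by d (linear interpolation)."""
    return np.interp(x + d, x, signal)

interior = slice(20, -20)          # ignore samples affected by edge clamping
d = 0.0                            # initial shift estimate
for _ in range(20):
    warped = warp(current, d)
    r = (warped - template)[interior]                   # photometric error
    J = 0.5 * (np.gradient(warped, x) + np.gradient(template, x))[interior]
    d -= np.sum(J * r) / np.sum(J * J)                  # Gauss-Newton step

print("estimated shift:", d)                            # close to 0.21
```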

17.
We present two novel mobile reflectometry approaches for acquiring detailed spatially varying isotropic surface reflectance and mesostructure of a planar material sample using commodity mobile devices. The first approach relies on the integrated camera and flash pair present on typical mobile devices to support free-form handheld acquisition of spatially varying rough specular material samples. The second approach, suited for highly specular samples, uses the LCD panel to illuminate the sample with polarized second-order gradient illumination. To address the limited overlap of the front facing camera's view and the LCD illumination (and thus limited sample size), we propose a novel appearance transfer method that combines controlled reflectance measurement of a small exemplar section with uncontrolled reflectance measurements of the full sample under natural lighting. Finally, we introduce a novel surface detail enhancement method that adds fine scale surface mesostructure from close-up observations under uncontrolled natural lighting. We demonstrate the accuracy and versatility of the proposed mobile reflectometry methods on a wide variety of spatially varying materials.

18.
Advanced Robotics, 2013, 27(3-4): 327-348
We present a mobile robot localization method using only a stereo camera. Vision-based localization in outdoor environments is a challenging issue because of extreme changes in illumination. To cope with varying illumination conditions, we use two-dimensional occupancy grid maps generated from three-dimensional point clouds obtained by a stereo camera. Furthermore, we incorporate salient line segments extracted from the ground into the grid maps. The grid maps are not significantly affected by illumination conditions because occupancy information and salient line segments can be robustly obtained. On the grid maps, the robot's poses are estimated using a particle filter that combines visual odometry and map matching. We use edge-point-based stereo simultaneous localization and mapping to simultaneously obtain occupancy information and robot ego-motion estimates. We tested our method under various illumination and weather conditions, including sunny and rainy days. The experimental results showed the effectiveness and robustness of the proposed method. Our method enables localization under extremely poor illumination conditions, which are challenging even for existing state-of-the-art methods.
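A minimal particle-filter sketch of the "visual odometry for prediction, map matching for weighting" combination follows; the 2D state, the toy wall-offset observation standing in for occupancy-grid matching, and all noise levels are invented for illustration and are not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a 2D position to localise, and an observation that stands in
# for occupancy-grid map matching -- here just the offsets to two known
# "walls" (a wall column at x = 10 and a wall row at y = 20).
def observe(pose):
    return np.array([pose[0] - 10.0, pose[1] - 20.0])

true_pose = np.array([25.0, 30.0])
z = observe(true_pose) + rng.normal(0.0, 0.3, 2)    # noisy measurement

particles = rng.uniform(0.0, 50.0, size=(500, 2))   # initially uniform belief
for _ in range(10):
    # 1. Prediction: propagate each particle with the visual-odometry
    #    increment (zero here, the robot is standing still) plus noise.
    particles += rng.normal(0.0, 0.5, particles.shape)
    # 2. Correction ("map matching"): weight particles by how well their
    #    predicted observation agrees with the measured one.
    pred = particles - np.array([10.0, 20.0])
    err = np.sum((pred - z) ** 2, axis=1)
    weights = np.exp(-0.5 * err / 0.5 ** 2)
    weights /= weights.sum()
    # 3. Resample according to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]

# The particle mean should land near the true pose.
print("estimated pose:", particles.mean(axis=0), "true pose:", true_pose)
```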

19.
This paper presents a novel method for estimating specular roughness and tangent vectors, per surface point, from polarized second order spherical gradient illumination patterns. We demonstrate that for isotropic BRDFs, only three second order spherical gradients are sufficient to robustly estimate spatially varying specular roughness. For anisotropic BRDFs, an additional two measurements yield specular roughness and tangent vectors per surface point. We verify our approach with different illumination configurations which project both discrete and continuous fields of gradient illumination. Our technique provides a direct estimate of the per-pixel specular roughness and thus does not require off-line numerical optimization that is typical for the measure-and-fit approach to classical BRDF modeling.

20.
Underexposed, low-light images are acquired when scene illumination is insufficient for a given camera. The camera's limitation lies in the high chance of producing motion-blurred images due to shaky hands. In this paper we suggest actively using underexposure as a measure to prevent motion-blurred images, and we propose a novel color transfer as a method for low-light image amplification. The proposed solution envisages a dual acquisition, consisting of a normally exposed, possibly blurred image and an underexposed, low-light but sharp one. Good colors are learned from the normally exposed image and transferred to the low-light one using a framework-matching solution. To ensure that the transfer is spatially consistent, the images are divided into perceptually consistent luminance patches called frameworks, and the optimal mapping is approximated piecewise. The two images may differ in color, so to improve the robustness of the spatial matching we add supplementary extreme channels. The proposed method shows robust results from both an objective and a subjective point of view.
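As a stand-in for the framework-based transfer, the sketch below fits a simple per-luminance-band, per-channel affine map from the dark image to the bright one and applies it; the binning, the affine model, the function name, and the synthetic test images are assumptions rather than the paper's method.

```python
import numpy as np

def piecewise_color_transfer(dark, bright, n_bins=8):
    """Transfer colours from a well-exposed image to an aligned underexposed
    one by fitting a per-luminance-band, per-channel affine map.
    Illustrative stand-in for the paper's framework-based transfer.

    dark, bright : (H, W, 3) aligned linear RGB images
    """
    dark = np.asarray(dark, dtype=float)
    bright = np.asarray(bright, dtype=float)
    lum = dark.mean(axis=2)
    edges = np.quantile(lum, np.linspace(0.0, 1.0, n_bins + 1))
    out = np.zeros_like(dark)
    for b in range(n_bins):
        mask = (lum >= edges[b]) & (lum <= edges[b + 1])
        if mask.sum() < 10:
            continue
        for c in range(3):
            x, y = dark[..., c][mask], bright[..., c][mask]
            a, o = np.polyfit(x, y, 1)           # least-squares gain/offset
            out[..., c][mask] = a * x + o
    return np.clip(out, 0.0, 1.0)

# Example: the "dark" image is a scaled, colour-cast copy of the bright one,
# so the transfer should recover the bright image almost exactly.
rng = np.random.default_rng(0)
bright = rng.uniform(0.2, 0.9, (64, 64, 3))
dark = np.clip(bright * np.array([0.20, 0.25, 0.30]), 0.0, 1.0)
print(np.abs(piecewise_color_transfer(dark, bright) - bright).mean())  # ~ 0
```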
