Similar Literature
20 similar documents found
1.
We present an approach that significantly enhances the spectral resolution of imaging systems by generalizing image mosaicing. A filter transmitting spatially varying spectral bands is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time in a different spectral band. This adds a dimension to the generalized mosaicing paradigm, which has previously been shown to yield high radiometric dynamic range images over a wide field of view using a spatially varying density filter. The resulting mosaic represents the spectrum at each scene point. Image acquisition is as easy as in traditional image mosaics. We derive an efficient scene sampling rate and use a registration method that accommodates the spatially varying properties of the filter. Using the data acquired by this method, we demonstrate scene rendering under different simulated illumination spectra, and we are also able to infer information about the scene illumination. The approach was tested using a standard 8-bit black-and-white video camera and a fixed spatially varying spectral (interference) filter.
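A minimal sketch of the gathering step implied above: once the moving camera's frames are registered, each mosaic pixel accumulates samples from whichever filter band passed over it. The names (`frames`, `offsets`, `band_of_column`) are illustrative placeholders, not the paper's API, and registration is assumed already done.

```python
import numpy as np

def gather_spectra(frames, offsets, band_of_column, n_bands):
    """frames: list of HxW grayscale frames taken as the camera pans.
    offsets: per-frame integer x-offset into the mosaic (from registration).
    band_of_column: length-W array of the filter band index at each column.
    Returns an (H, mosaic_width, n_bands) mean spectral sample per point."""
    H, W = frames[0].shape
    width = max(offsets) + W
    total = np.zeros((H, width, n_bands))
    count = np.zeros((H, width, n_bands))
    for img, dx in zip(frames, offsets):
        for x in range(W):
            b = band_of_column[x]
            total[:, dx + x, b] += img[:, x]
            count[:, dx + x, b] += 1
    return total / np.maximum(count, 1)  # unsampled bands remain 0
```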

2.
We present an approach to imaging the polarization state of object points over a wide field of view while enhancing the radiometric dynamic range of the imaging system, by generalizing image mosaicing. The approach is biologically inspired: it emulates the spatially varying polarization sensitivity of some animals. In our method, a spatially varying polarization and attenuation filter is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time filtering it through a different polarizing angle, polarizance, and transmittance. Polarization is an additional dimension of the generalized mosaicing paradigm, which has recently yielded high dynamic range and multispectral images in a wide field of view using other kinds of filters. Image acquisition is as easy as in traditional image mosaics. The computational algorithm handles nonideal polarization filters (partial polarizers), variable exposures, and saturation in a single framework. The resulting mosaic represents the polarization state at each scene point. Using data acquired by this method, we demonstrate attenuation and enhancement of specular reflections and the separation of semi-reflections in an image mosaic.
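Where the abstract speaks of recovering the polarization state from several filtered observations, a least-squares fit of the linear Stokes parameters is one standard formulation. The sketch below assumes ideal polarizers; the paper's algorithm additionally handles partial polarizance, transmittance variation, and saturation.

```python
import numpy as np

def fit_polarization(samples, angles_deg):
    """samples: (N,) intensities of one scene point seen through N filter angles.
    Ideal-polarizer model: I(t) = 0.5 * (S0 + S1*cos(2t) + S2*sin(2t)).
    Returns total intensity, degree and angle of linear polarization."""
    t = np.deg2rad(np.asarray(angles_deg, dtype=float))
    A = 0.5 * np.stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)], axis=1)
    s0, s1, s2 = np.linalg.lstsq(A, np.asarray(samples, dtype=float), rcond=None)[0]
    dolp = np.hypot(s1, s2) / s0                 # degree of linear polarization
    aolp = 0.5 * np.degrees(np.arctan2(s2, s1))  # angle of polarization
    return s0, dolp, aolp
```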

3.
In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, this assumption does not hold in most cases, due to the nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and in the textures of 3D models, where colors look inconsistent and noticeable boundaries appear. In this paper, we propose a full radiometric calibration algorithm that includes robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function estimation, we approach each process in a manner that is robust to noise and outliers. We verify our algorithm on both synthetic and real data, showing significant improvement over existing methods. We apply our estimation results to radiometrically align images for seamless mosaics and 3D model textures. We also use our method to create high dynamic range (HDR) mosaics, which are more representative of the scene than normal mosaics.
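Once the three factors above are estimated, undoing them per pixel is straightforward. The sketch below assumes an even radial polynomial for vignetting (a common parameterization, not necessarily the paper's exact model) and an already-estimated inverse response g = f^(-1); the gamma curve and coefficients in the usage line are hypothetical.

```python
import numpy as np

def radial_vignette(h, w, a, b, c):
    """Vignetting map V = 1 + a*r^2 + b*r^4 + c*r^6, r normalized to 1 at corners."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((yy - cy) ** 2 + (xx - cx) ** 2) / (cy ** 2 + cx ** 2)
    return 1.0 + a * r2 + b * r2 ** 2 + c * r2 ** 3

def scene_radiance(img, inv_response, exposure, vignette):
    """Undo response, exposure, and vignetting: L = g(I) / (k * V)."""
    return inv_response(img) / (exposure * np.maximum(vignette, 1e-6))

# Usage with a hypothetical gamma-2.2 response and assumed coefficients:
V = radial_vignette(480, 640, -0.3, -0.05, 0.0)
L = scene_radiance(np.full((480, 640), 0.5), lambda I: I ** 2.2, 1 / 60.0, V)
```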

4.
Computational photography relies on specialized image-processing techniques that combine multiple images captured by a camera to generate a desired image of the scene. We first consider the high dynamic range (HDR) imaging problem: one can change either the exposure time or the aperture while capturing multiple images of the scene to generate an HDR image. This paper addresses HDR imaging for static and dynamic scenes captured by a stationary camera under various aperture and exposure settings, when we have no knowledge of the camera settings. We propose a novel framework based on sparse representation that enables us to process the images while suppressing artifacts due to moving objects and defocus blur. We show that the proposed approach produces high-quality results through its dynamic-object rejection and deblurring capabilities. We compare the results with other competitive approaches and discuss the relative advantages of the proposed approach.

5.
We present a novel technique for capturing spatially or temporally resolved light probe sequences and using them for image-based lighting. For this purpose we have designed and built a real-time light probe: a catadioptric imaging system that can capture the full dynamic range of the lighting incident at each point in space at video frame rates while being moved through a scene. The real-time light probe uses a digital imaging system which we have programmed to capture high-quality, photometrically accurate color images of 512×512 pixels with a dynamic range of 10,000,000:1 at 25 frames per second. By tracking the position and orientation of the light probe, it is possible to transform each light probe image into a common frame of reference in world coordinates, and to map each point and direction in space along the path of motion to a particular frame and pixel in the light probe sequence. We demonstrate our technique by rendering synthetic objects illuminated by complex real-world lighting, first using traditional image-based lighting methods with temporally varying light probe illumination, and second using an extension that handles spatially varying lighting conditions across large objects and object motion along an extended path.

6.
Split Aperture Imaging for High Dynamic Range
Most imaging sensors have limited dynamic range and hence are sensitive to only part of the illumination range present in a natural scene. The dynamic range can be improved by acquiring multiple images of the same scene under different exposure settings and then combining them. In this paper, we describe a camera design for acquiring multiple images simultaneously. The cross-section of the incoming beam from a scene point is partitioned into as many parts as the required number of images. This is done by splitting the aperture into multiple parts and directing the beam exiting from each part in a different direction using an assembly of mirrors. A sensor is placed in the path of each beam, and the exposure of each sensor is controlled either by appropriately setting its exposure parameter or by splitting the incoming beam unevenly. The resulting multiple-exposure images are used to construct a high dynamic range image. We have implemented a video-rate camera based on this design, and the results obtained are presented.
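The final merging step described above is typically a weighted average of per-sensor radiance estimates. A minimal sketch, assuming linear (radiometrically calibrated) sensors; the hat-shaped weight is one common choice, not necessarily the paper's.

```python
import numpy as np

def merge_hdr(images, exposures):
    """images: list of HxW linear images in [0,1], captured simultaneously
    through the split aperture; exposures: relative exposure of each sensor.
    Hat-shaped weights downweight near-dark and near-saturated pixels."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(num)
    for img, e in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peaks at mid-range intensities
        num += w * img / e                   # back-project to scene radiance
        den += w
    return num / np.maximum(den, 1e-8)
```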

7.
We propose to enhance the capabilities of the human visual system by performing optical image processing directly on an observed scene. Unlike previous work, which additively superimposes imagery on a scene or completely replaces scene imagery with a manipulated version, we perform all manipulation through the use of a light modulation display to spatially filter incoming light. We demonstrate a number of perceptually motivated algorithms including contrast enhancement and reduction, object highlighting for preattentive emphasis, colour saturation, de-saturation and de-metamerization, as well as visual enhancement for the colour blind. A camera observing the scene guides the algorithms for on-the-fly processing, enabling dynamic application scenarios such as monocular scopes, eyeglasses and windshields.
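Since a transmissive modulator can only attenuate incoming light, any target appearance must be reachable by per-pixel dimming. A toy sketch of that constraint, assuming a registered camera view of the scene and ignoring the optics of the display path; names and the clipping bound are illustrative.

```python
import numpy as np

def modulation_mask(observed, desired, max_transmittance=1.0):
    """observed: HxW scene luminance as seen by the guiding camera;
    desired: HxW target appearance. A transmissive modulator can only
    attenuate, so achievable targets satisfy desired <= observed."""
    mask = desired / np.maximum(observed, 1e-6)
    return np.clip(mask, 0.0, max_transmittance)
```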

8.
Typical high dynamic range (HDR) imaging approaches based on multiple images have difficulty handling moving objects and camera shake, suffering from ghosting and loss of sharpness in the output HDR image. While a variety of solutions exist for resolving such limitations, most existing algorithms are susceptible to complex motions, saturation, and occlusions. In this paper, we propose an HDR imaging approach using a coded electronic shutter, which can capture a scene with row-wise varying exposures in a single image. Our approach enables a direct extension of the dynamic range of the captured image without using multiple images, by photometrically calibrating rows with different exposures. Because multiple exposures are captured concurrently, misalignments of moving objects are naturally avoided, significantly reducing the ghosting effect. To handle under-/over-exposure, noise, and blur, we present a coherent HDR imaging process in which these problems are resolved one by one at each step. Experimental results with real photographs, captured using a coded electronic shutter, demonstrate that our method produces high-quality HDR images without ghosting and blur artifacts.
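The row-calibration idea reads directly as code: dividing each row by its exposure puts all rows on a common radiance scale, with a validity mask marking pixels left for the later denoising/inpainting steps. The exposure layout and thresholds below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def calibrate_rows(raw, row_exposure, lo=0.02, hi=0.98):
    """raw: HxW linear sensor image with row-wise exposure coding;
    row_exposure: (H,) exposure per row. Division brings all rows to a
    common radiance scale; the mask flags pixels needing later
    denoising/inpainting (not shown)."""
    radiance = raw / row_exposure[:, None]
    valid = (raw > lo) & (raw < hi)   # illustrative reliability thresholds
    return radiance, valid
```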

9.
We discuss calibration and removal of "vignetting" (radial falloff) and exposure (gain) variations from sequences of images. Even when the response curve is known, spatially varying ambiguities prevent us from recovering the vignetting, exposure, and scene radiances uniquely. However, the vignetting and exposure variations can nonetheless be removed from the images without resolving these ambiguities or the previously known scale and gamma ambiguities. Applications include panoramic image mosaics, photometry for material reconstruction, image-based rendering, and preprocessing for correlation-based vision algorithms.

10.
This paper introduces a novel camera attachment for spatially measuring the illumination color in the scene. The illumination color is then used to transform color appearance in the image into that under white light. The main idea is that scene inter-reflection through a camera-attached reference surface (the "nose") can, under some conditions, represent the illumination color directly. The illumination measurement principle relies on the satisfaction of the gray-world assumption in a local scene area, or on the appearance of highlights from dielectric surfaces. Scene inter-reflections are strongly blurred due to optical dispersion on the nose surface and defocusing of the nose surface image. Blurring smooths the intense highlights, making it possible to measure the nose inter-reflection under conditions in which the intensity variation in the main image would exceed the sensor's dynamic range. We designed a nose surface that reflects a blurred version of the scene into a small image section, which is interpreted as a spatial illumination image. The nose image is then mapped to the main image to adjust every pixel's color. Experimental results show that the nose inter-reflection color is a good measure of the illumination color when the model assumptions are satisfied. The performance of the nose method on real images is presented and compared with the Retinex and scene-inserted white-patch methods.
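The final per-pixel adjustment amounts to a diagonal (von Kries) correction with the measured illumination color. A minimal sketch, assuming the nose image has already been mapped onto the main image; function names are illustrative.

```python
import numpy as np

def white_balance(img, illum_rgb):
    """img: HxWx3 image; illum_rgb: illumination colour from the blurred nose
    inter-reflection, either global (3,) or per-pixel (HxWx3). A diagonal
    (von Kries) correction maps appearance to that under white light."""
    illum = np.asarray(illum_rgb, dtype=float)
    illum = illum / illum.mean(axis=-1, keepdims=True)  # keep overall brightness
    return np.clip(img / np.maximum(illum, 1e-6), 0.0, 1.0)
```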

11.
We present a simple and effective technique for absolute colorimetric camera characterization that is invariant to changes in exposure/aperture and scene irradiance, suitable for a wide range of applications including image-based reflectance measurements, spectral pre-filtering and spectral upsampling for rendering, and improving colour accuracy in high dynamic range imaging. Our method requires only a limited number of acquisitions, an off-the-shelf target, and a commonly available projector used as a controllable light source, rather than requiring the reflected radiance to be known. The characterized camera can be used effectively as a 2D tele-colorimeter, providing the user with an accurate estimate of the distribution of luminance and chromaticity in a scene without requiring explicit knowledge of the incident lighting power spectra. We validate the approach by comparing our estimated absolute tristimulus values (XYZ data) with the measurements of a professional 2D tele-colorimeter, for a set of scenes with complex geometry, spatially varying reflectance, and light sources with very different spectral power distributions.
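At the core of most colorimetric characterizations is a linear map from linearized camera RGB to XYZ, fitted over target patches. A minimal least-squares sketch under that assumption; the paper's exposure-invariance machinery is not reproduced here.

```python
import numpy as np

def fit_rgb_to_xyz(rgb_patches, xyz_patches):
    """rgb_patches: (N,3) linearized, exposure-normalized camera responses to
    the target patches; xyz_patches: (N,3) reference tristimulus values.
    Solves rgb @ M ~= xyz in the least-squares sense."""
    M, *_ = np.linalg.lstsq(rgb_patches, xyz_patches, rcond=None)
    return M  # (3,3); apply to an image as img_linear @ M
```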

12.
Rendering with a full lens model can offer images with photorealistic lens effects, but it incurs high computational costs. This paper proposes a novel camera lens model, NeuroLens, that emulates the imaging of real camera lenses through a data-driven approach. The mapping of image formation in a camera lens is formulated as imaging regression functions (IRFs), which map input rays to output rays. IRFs are approximated with neural networks, which compactly represent the imaging properties and support parallel evaluation on a graphics processing unit (GPU). To effectively represent the spatially varying imaging properties of a camera lens, the input space spanned by incident rays is subdivided into multiple subspaces, and each subspace is fitted with a separate IRF. To further raise evaluation accuracy, a set of neural networks is trained for each IRF and the output is calculated as the average output of the set. The effectiveness of NeuroLens is demonstrated by fitting a wide range of real camera lenses. Experimental results show that it provides higher imaging accuracy than state-of-the-art camera lens models, while maintaining high efficiency in processing camera rays.
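A toy illustration of fitting one IRF as described above, using an off-the-shelf MLP and synthetic stand-in data; the paper trains GPU-evaluated network ensembles per input subspace on rays traced through the real lens, which this sketch does not reproduce.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rays_in = rng.uniform(-1, 1, (10000, 4))      # (x, y, u, v) entrance-side rays
rays_out = rays_in @ rng.normal(size=(4, 4))  # stand-in for traced ground truth

irf = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
irf.fit(rays_in, rays_out)                    # one IRF for one input subspace
pred = irf.predict(rays_in[:5])               # predicted sensor-side rays
```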

13.
Current HDR acquisition techniques are based on either (i) fusing multi-bracketed, low dynamic range (LDR) images, (ii) modifying existing hardware to capture different exposures simultaneously with multiple sensors, or (iii) reconstructing a single image with spatially varying pixel exposures. In this paper, we propose a novel algorithm to recover high-quality HDR images from a single, coded exposure. The proposed reconstruction method builds on recently introduced ideas from convolutional sparse coding (CSC); this paper demonstrates how to make CSC practical for HDR imaging. We demonstrate that the proposed algorithm achieves higher-quality reconstructions than alternative methods, evaluate optical coding schemes, analyze algorithmic parameters, and build a prototype coded HDR camera that demonstrates the utility of convolutional sparse coding for HDR imaging on a custom hardware platform.
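For intuition about the sparse-coding machinery, a minimal ISTA solver for the plain (non-convolutional) lasso problem min_z ½‖Dz − y‖² + λ‖z‖₁ is sketched below; the paper's reconstruction is convolutional and considerably more involved.

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    """ISTA for min_z 0.5*||D z - y||^2 + lam*||z||_1.
    D: (m,k) dictionary; y: (m,) coded measurement."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - D.T @ (D @ z - y) / L      # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return z
```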

14.
The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR): for a dark scene, the aperture of the lens must be opened up to maintain SNR, which reduces the DOF. Moreover, today's cameras have DOFs that correspond to a single slab perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF, where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur; deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs: for instance, near and far objects can be imaged sharply while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
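The extended-DOF application above hinges on deconvolution with a single, nearly depth-invariant kernel; Wiener deconvolution is one standard way to do this. A sketch, with an assumed scalar SNR rather than the paper's exact recipe.

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Embed the kernel with its center wrapped to index (0,0), then FFT."""
    out = np.zeros(shape)
    kh, kw = psf.shape
    out[:kh, :kw] = psf
    out = np.roll(out, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(out)

def wiener_deconvolve(img, psf, snr=100.0):
    """Deconvolve with the single, nearly depth-invariant blur kernel of the
    extended-DOF mode; snr is an assumed scalar signal-to-noise ratio."""
    H = psf_to_otf(psf, img.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))
```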

15.

The dynamic range of a scene can be significantly wider than the dynamic range of an image because of the limitations of A/D conversion. In such situations, numerous details of the scene cannot be adequately shown in the image. Standard industrial digital cameras are equipped with an auto-exposure function that automatically sets both the aperture value and the camera's exposure time. When imaging a scene with an atypical distribution of light and dark elements, the auto-exposure time may not be optimal. The aim of this work was to improve, at minimal cost, the performance of standard industrial digital cameras. We propose a low-complexity method for creating an HDR-like image using three images captured with different exposure times. The proposed method consists of three algorithms: (1) an algorithm that estimates whether the auto-exposure time is optimal, (2) an algorithm that determines exposure times for two additional images (one shorter and one longer than the auto-exposure time), and (3) an algorithm for HDR-like imaging based on fusion of the three previously obtained images. The method is implemented on an FPGA inserted into a standard industrial digital camera. Results show that the proposed approach produces high-quality, HDR-like, scene-mapped 8-bit images with minimal computational cost. All improvements may be observed through the performance evaluation.
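A single-scale, well-exposedness-weighted fusion is one low-complexity way to realize step (3); the sketch below is a simplified software analogue under that assumption, not the paper's FPGA pipeline.

```python
import numpy as np

def fuse_ldr(images):
    """images: three differently exposed 8-bit images (HxW or HxWx3).
    Single-scale, well-exposedness-weighted fusion into an HDR-like
    8-bit result; weights favour mid-tone pixels."""
    stack = np.stack([im.astype(float) / 255.0 for im in images])
    w = np.exp(-0.5 * ((stack - 0.5) / 0.2) ** 2) + 1e-6
    fused = (w * stack).sum(axis=0) / w.sum(axis=0)
    return (255.0 * np.clip(fused, 0.0, 1.0)).astype(np.uint8)
```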

16.
High dynamic range (HDR) imaging requires one to composite multiple, differently exposed images of a scene in the irradiance domain and to tone map the generated HDR image for display on low dynamic range (LDR) devices. For dynamic scenes, standard techniques may introduce artifacts called ghosts if scene changes are not accounted for. In this paper, we consider the blind HDR problem for dynamic scenes. We develop a novel bottom-up segmentation algorithm based on superpixel grouping, which enables us to detect scene changes. We then employ a piecewise, patch-based compositing methodology in the gradient domain to directly generate a ghost-free LDR image of the dynamic scene. Being blind, our method has the primary advantage of assuming no knowledge of the camera response function or exposure settings, while preserving contrast even in the non-stationary regions of the scene. We compare the results of our approach for both static and dynamic scenes with those of state-of-the-art techniques.

17.
This paper presents an automated approach to recovering the true color of objects on the seafloor in images collected from multiple perspectives by an autonomous underwater vehicle (AUV) during the construction of three-dimensional (3D) seafloor models and image mosaics. When capturing images underwater, the water column induces several effects on light that are typically negligible in air, such as color-dependent attenuation and backscatter. AUVs must typically carry artificial lighting when operating at depths below 20-30 m, and the lighting pattern generated is usually not spatially consistent. These effects cause problems for human interpretation of images, limit the ability to use color to identify benthic biota or quantify changes over multiple dives, and confound computer-based techniques for clustering and classification. Our approach exploits the 3D structure of the scene, generated using structure-from-motion and photogrammetry techniques, to provide basic spatial data to an underwater image formation model. Parameters that depend on the properties of the water column are estimated from the image data itself, rather than using fixed in situ infrastructure such as reflectance panels or detailed data on water constituents. The model accounts for distance-based attenuation and backscatter, camera vignetting, and the artificial lighting pattern, recovering measurements of the true color (reflectance) and thus allowing us to approximate the appearance of the scene as if imaged in air and illuminated from above. Our method is validated against known color targets using imagery collected in different underwater environments by two AUVs that are routinely used as part of a benthic habitat monitoring program.
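Given range data from the 3D model, a simple per-channel attenuation-plus-backscatter model can be inverted directly. The sketch below uses one common form of such a model with illustrative parameter names; the paper additionally models the vehicle's lighting pattern and vignetting.

```python
import numpy as np

def recover_color(img, depth, atten, veil):
    """Invert I = J*exp(-c*d) + B_inf*(1 - exp(-c*d)) per channel.
    img: HxWx3 observed image in [0,1]; depth: HxW range from the 3D model;
    atten: (3,) attenuation coefficients c; veil: (3,) veiling light B_inf."""
    t = np.exp(-depth[..., None] * np.asarray(atten))   # per-channel transmission
    J = (img - np.asarray(veil) * (1.0 - t)) / np.maximum(t, 1e-3)
    return np.clip(J, 0.0, 1.0)
```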

18.
We present an approach to jointly estimating camera motion and the dense structure of a static scene, in the form of depth maps, from monocular image sequences in driver-assistance scenarios. At each instant of time, only two consecutive frames are processed as input to a joint estimator that fully exploits second-order information of the corresponding optimization problem and effectively copes with the non-convexity arising from both the imaging geometry and the manifold of motion parameters. Additionally, carefully designed Gaussian approximations enable probabilistic inference based on locally varying confidence and globally varying sensitivity due to the epipolar geometry, with respect to the high-dimensional depth map estimation. Embedding the resulting joint estimator in an online recursive framework achieves a pronounced spatio-temporal filtering effect and robustness. We evaluate hundreds of images taken from a car moving at speeds of up to 100 km/h, drawn from a publicly available benchmark data set. The results compare favorably with two alternative settings: stereo-based scene reconstruction, and camera motion estimation in batch mode using multiple frames. These alternatives, however, require a calibrated camera pair or storage for more than two frames, which is less attractive from a technical viewpoint than the proposed monocular and recursive approach. In addition to real data, a synthetic sequence is considered, which provides reliable ground truth.

19.
This paper proposes a new approach to image-based rendering that generates an image viewed from an arbitrary camera position and orientation by rendering optical flows extracted from reference images. To derive valid optical flows, we develop an analysis technique that improves the quality of stereo matching. Without using any special equipment such as range cameras, this technique constructs reliable optical flows from a sequence of matching results between reference images. We also derive validity conditions for optical flows and show that the obtained flows satisfy those conditions. Since the environment geometry is inferred from the optical flows, we are able to generate more accurate images with this additional geometric information. Our approach makes it possible to combine an image rendered from optical flows with an image generated by a conventional rendering technique through a simple Z-buffer algorithm.

20.
Photorealistic Rendering of Bokeh Effects
To address the limited realism of bokeh effects rendered by existing methods, this paper proposes a photorealistic bokeh rendering method based on geometric optics. Starting from the law of refraction governing light propagation, the method uses sequential ray tracing to accurately model the optical imaging characteristics of a camera lens. It precisely simulates the internal structure of the lens, including the aperture stop and vignetting stops, so as to render bokeh shaped jointly by the aperture shape and vignetting. Using geometric optics and sequential ray tracing, the position and size of the exit pupil are computed accurately to guide ray sampling and improve ray tracing efficiency. Rendering results show that the method produces convincing bokeh effects, correctly simulates the influence of aperture shape and vignetting on the bokeh, and achieves high ray tracing efficiency.
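Each surface hit in the sequential ray trace described above applies Snell's law; its standard vector form is sketched below (a generic building block, not the paper's full lens simulator).

```python
import numpy as np

def refract(d, n, eta):
    """One step of a sequential ray trace: refract unit direction d at a
    surface with unit normal n (pointing against d), eta = n1/n2.
    Returns None on total internal reflection. Vector form of Snell's law."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n
```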
