Similar Documents
20 similar documents found.
1.
2.
In the last decade, an increasing number of techniques have been developed to reproduce high dynamic range imagery on traditional displays. These techniques, known as Tone Mapping Operators (TMOs), have been compared and ranked in different ways according to several image characteristics. However, none of these algorithms has been developed specifically for small screen devices (SSD). In this paper, we present an evaluation of currently used TMOs to show that SSDs with limited size, resolution and colour depth require specific research to find or create an appropriate solution. The research described in this paper is based on psychophysical experiments using three different types of displays (CRT, LCD and SSD). The results show that the rankings are similar for the LCD and CRT but significantly different for the SSD. Furthermore, these rankings show that some characteristics of TMOs need to be emphasized to obtain better high-fidelity mapped images for SSDs.

3.
High dynamic range (HDR) imagery permits the manipulation of real-world data distinct from the limitations of the traditional, low dynamic range (LDR), content. The process of retargeting HDR content to traditional LDR imagery via tone mapping operators (TMOs) is useful for visualizing HDR content on traditional displays, supporting backwards-compatible HDR compression and, more recently, is being frequently used for input into a wide variety of computer vision applications. This work presents the automatic generation of TMOs for specific applications via the evolutionary computing method of genetic programming (GP). A straightforward, generic GP method that generates TMOs for a given fitness function and HDR content is presented. Its efficacy is demonstrated in the context of three applications: visualization of HDR content on LDR displays, feature mapping and compression. For these applications, results show good performance for the generated TMOs when compared to traditional methods. Furthermore, they demonstrate that the method is generalizable and could be used across various applications that require TMOs but for which dedicated successful TMOs have not yet been discovered.
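To make the evolutionary idea concrete, here is a minimal Python sketch of the loop: a population of parametric tone curves is scored against a fitness function and refined by selection and mutation. It is not the paper's GP system (which evolves expression trees); the Reinhard-style curve, the toy fitness and all population settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def tone_map(hdr, a, gamma):
    """Illustrative parametric operator: global compression followed by gamma."""
    l = hdr / (hdr + a)          # Reinhard-style compression; 'a' controls the knee
    return np.clip(l ** gamma, 0.0, 1.0)

def fitness(ldr):
    """Toy fitness: reward a mid-grey mean and high contrast (a stand-in for an
    application-specific objective such as a feature-matching score)."""
    return -abs(ldr.mean() - 0.5) + ldr.std()

def evolve(hdr, pop=20, gens=30):
    # population of (a, gamma) genomes; a real GP system would evolve expression trees
    genomes = rng.uniform([0.01, 0.3], [2.0, 1.5], size=(pop, 2))
    for _ in range(gens):
        scores = np.array([fitness(tone_map(hdr, a, g)) for a, g in genomes])
        parents = genomes[np.argsort(scores)[-pop // 2:]]         # selection
        children = parents + rng.normal(0, 0.05, parents.shape)   # mutation
        genomes = np.vstack([parents, np.abs(children)])
    best = genomes[np.argmax([fitness(tone_map(hdr, a, g)) for a, g in genomes])]
    return best

hdr = rng.gamma(2.0, 2.0, size=(64, 64))   # synthetic HDR luminance
print(evolve(hdr))
```

Swapping the fitness function is all that is needed to retarget the search to a different application, which is the point the abstract makes.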

4.
Mobile phones and tablets are rapidly gaining significance as omnipresent image and video capture devices. In this context we present an algorithm that allows such devices to capture high dynamic range (HDR) video. The design of the algorithm was informed by a perceptual study that assesses the relative importance of motion and dynamic range. We found that ghosting artefacts are more visually disturbing than a reduction in dynamic range, even if a comparable number of pixels is affected by each. We incorporated these findings into a real-time, adaptive metering algorithm that seamlessly adjusts its settings to take exposures that will lead to minimal visual artefacts after recombination into an HDR sequence. It is uniquely suitable for real-time selection of exposure settings. Finally, we present an off-line HDR reconstruction algorithm that is matched to the adaptive nature of our real-time metering approach.
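A minimal sketch of the trade-off the study motivates: a metering rule that widens the exposure bracket when many pixels clip, but narrows it again when estimated motion is high so that ghosting stays low. The thresholds, the motion input and the linear rules are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def choose_exposures(lum, motion, base_ev=0.0, max_spread=4.0):
    """Toy metering rule in the spirit of the paper's finding: when estimated
    motion is high, narrow the exposure spread (accepting less dynamic range)
    to keep ghosting artefacts low. All constants are illustrative."""
    under = np.mean(lum < 0.02)                 # fraction of under-exposed pixels
    over = np.mean(lum > 0.98)                  # fraction of over-exposed pixels
    spread = max_spread * (under + over)        # more clipping -> wider bracket
    spread *= 1.0 / (1.0 + 4.0 * motion)        # more motion   -> narrower bracket
    return base_ev - spread / 2, base_ev + spread / 2

# example: a mostly well-exposed frame with moderate motion
rng = np.random.default_rng(1)
frame = rng.uniform(0, 1, (480, 640))
print(choose_exposures(frame, motion=0.3))
```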

5.
Eleven tone-mapping operators intended for video processing are analyzed and evaluated with camera-captured and computer-generated high-dynamic-range content. After optimizing the parameters of the operators in a formal experiment, we inspect and rate the artifacts (flickering, ghosting, temporal color consistency) and color rendition problems (brightness, contrast and color saturation) they produce. This allows us to identify major problems and challenges that video tone-mapping needs to address. Then, we compare the tone-mapping results in a pair-wise comparison experiment to identify the operators that, on average, can be expected to perform better than the others and to assess the magnitude of differences between the best performing operators.

6.
Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur in cases of camera movement. This also applies to light-field cameras: frames rendered from multiple blurred HDR light-field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single-sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light-field video recording. Applying a spatio-temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light-field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.
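A small sketch of what such a spatio-temporal exposure pattern could look like: every frame contains all exposures across the cameras, and each camera cycles through them over time, so same-exposure neighbours exist for motion estimation. The concrete pattern, exposure values and array size below are assumptions; the paper's schedule may differ.

```python
import numpy as np

def exposure_schedule(n_cameras, n_frames, exposures=(1/480, 1/120, 1/30)):
    """Illustrative spatio-temporal pattern: at every frame each exposure is held
    by at least one camera, and each camera cycles through the exposures over time,
    so motion can be estimated from same-exposure neighbours."""
    k = len(exposures)
    sched = np.empty((n_frames, n_cameras))
    for t in range(n_frames):
        for c in range(n_cameras):
            sched[t, c] = exposures[(c + t) % k]
    return sched

print(exposure_schedule(n_cameras=4, n_frames=3))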

7.
The image quality of three organic light-emitting diode (OLED) based smart-phone displays was assessed at three levels of ambient lighting corresponding to darkroom, indoor and outdoor environments, respectively. Seven perceptual attributes, i.e., naturalness, colorfulness, brightness, contrast, sharpness, preference, and overall image quality (IQ), were evaluated in both standard dynamic range (SDR) and high dynamic range (HDR) mode via psychophysical experiments using the rank-order method, while readability was assessed only in SDR mode and gradation was investigated only in HDR mode. The experimental results demonstrate that, besides the color gamut, the tone reproduction curve is also an important factor affecting the colorfulness of mobile displays in the two modes. Higher peak luminance does not necessarily mean better performance in brightness and contrast for HDR images, the opposite of what is observed in SDR mode. Further analysis of variance (ANOVA) indicates that the ranking results of all perceptual attributes are not significantly affected by the ambient lighting levels in either SDR or HDR mode.

8.
Typical high dynamic range (HDR) imaging approaches based on multiple images have difficulties in handling moving objects and camera shakes, suffering from the ghosting effect and the loss of sharpness in the output HDR image. While there exist a variety of solutions for resolving such limitations, most of the existing algorithms are susceptible to complex motions, saturation, and occlusions. In this paper, we propose an HDR imaging approach using the coded electronic shutter, which can capture a scene with row-wise varying exposures in a single image. Our approach enables a direct extension of the dynamic range of the captured image without using multiple images, by photometrically calibrating rows with different exposures. Due to the concurrent capture of multiple exposures, misalignments of moving objects are naturally avoided with significant reduction in the ghosting effect. To handle the issues with under-/over-exposure, noise, and blurs, we present a coherent HDR imaging process where the problems are resolved one by one at each step. Experimental results with real photographs, captured using a coded electronic shutter, demonstrate that our method produces high-quality HDR images without ghosting and blur artifacts.
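The row-wise calibration and merge step can be sketched as follows: each row is divided by its exposure time, and clipped or noisy pixels are down-weighted before blending with neighbouring rows. The weighting function and the simple vertical blend are illustrative stand-ins for the paper's full pipeline, which additionally treats noise and blur explicitly.

```python
import numpy as np

def rows_to_hdr(img, row_exposure, sat=0.95, floor=0.02):
    """Minimal sketch of merging row-wise varying exposures from a single
    coded-shutter image: normalise each row by its exposure time and blend
    vertically adjacent rows, down-weighting clipped or noisy pixels."""
    h, w = img.shape
    radiance = img / row_exposure[:, None]               # photometric calibration
    weight = np.clip((img - floor) * (sat - img), 0, None)
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for dy in (-1, 0, 1):                                # blend with neighbouring rows
        num += np.roll(radiance * weight, dy, axis=0)
        den += np.roll(weight, dy, axis=0)
    return num / np.maximum(den, 1e-6)

rng = np.random.default_rng(2)
exposure = np.tile([1/30, 1/250], 120)                   # alternating row exposures
scene = rng.uniform(0, 20, (240, 320))
shot = np.clip(scene * exposure[:, None], 0, 1)          # simulated coded-shutter frame
print(rows_to_hdr(shot, exposure).shape)
```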

9.
In this paper we present a new practical camera characterization technique to improve color accuracy in high dynamic range (HDR) imaging. Camera characterization refers to the process of mapping device-dependent signals, such as digital camera RAW images, into a well-defined color space. This is a well-understood process for low dynamic range (LDR) imaging and is part of most digital cameras, usually mapping from the raw camera signal to the sRGB or Adobe RGB color space. This paper presents an efficient and accurate characterization method for high dynamic range imaging that extends previous methods originally designed for LDR imaging. We demonstrate that our characterization method is very accurate even in unknown illumination conditions, effectively turning a digital camera into a measurement device that measures physically accurate radiance values, both in terms of luminance and color, rivaling more expensive measurement instruments.
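The core of a characterization step can be sketched as a least-squares fit of a 3x3 matrix taking linear RAW values to CIE XYZ, trained on patches with known reference values. The chart data below is synthetic, and the single-matrix model is a simplification of the paper's HDR-specific method.

```python
import numpy as np

def fit_characterization(raw_rgb, ref_xyz):
    """Least-squares fit of a 3x3 matrix taking linear RAW triplets to CIE XYZ.
    In practice the training data would come from HDR captures of a colour chart
    with measured reference values; here both arrays are placeholders."""
    X, *_ = np.linalg.lstsq(raw_rgb, ref_xyz, rcond=None)
    return X.T                                  # so that xyz = M @ raw (column vectors)

# synthetic chart: 24 patches, linear RAW triplets and corresponding XYZ values
rng = np.random.default_rng(3)
raw = rng.uniform(0.01, 1.0, (24, 3))
true_M = np.array([[0.41, 0.36, 0.18],
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
xyz = raw @ true_M.T
M = fit_characterization(raw, xyz)
print(np.round(M, 2))                           # recovers the synthetic matrix
```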

10.
Cinemagraphs are a popular new type of visual media that lie in-between photos and video; some parts of the frame are animated and loop seamlessly, while other parts of the frame remain completely still. Cinemagraphs are especially effective for portraits because they capture the nuances of our dynamic facial expressions. We present a completely automatic algorithm for generating portrait cinemagraphs from a short video captured with a hand-held camera. Our algorithm uses a combination of face tracking and point tracking to segment face motions into two classes: gross, large-scale motions that should be removed from the video, and dynamic facial expressions that should be preserved. This segmentation informs a spatially-varying warp that removes the large-scale motion, and a graph-cut segmentation of the frame into dynamic and still regions that preserves the finer-scale facial expression motions. We demonstrate the success of our method with a variety of results and a comparison to previous work.

11.
Although several new tone-mapping operators are proposed each year, there is no reliable method to validate their performance or to tell how different they are from one another. In order to analyze and understand the behavior of tone-mapping operators, we model their mechanisms by fitting a generic operator to an HDR image and its tone-mapped LDR rendering. We demonstrate that the majority of both global and local tone-mapping operators can be well approximated by computationally inexpensive image processing operations, such as a per-pixel tone curve, a modulation transfer function and color saturation adjustment. The results produced by such a generic tone-mapping algorithm are often visually indistinguishable from those of much more expensive algorithms, such as the bilateral filter. We show the usefulness of our generic tone-mapper in backward-compatible HDR image compression, the black-box analysis of existing tone-mapping algorithms and the synthesis of new algorithms that are combinations of existing operators.
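A stripped-down version of such a generic operator, keeping only the per-pixel tone curve and the colour-saturation adjustment (the fitted modulation transfer function used for local operators is omitted), might look like the sketch below; the example curve and saturation exponent are arbitrary choices, not fitted values.

```python
import numpy as np

def generic_tmo(rgb, curve_x, curve_y, saturation=0.8):
    """Minimal generic operator: a per-pixel tone curve applied to log-luminance
    plus a colour-saturation adjustment of the chroma ratios."""
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    lum_t = np.interp(np.log10(lum + 1e-8), curve_x, curve_y)   # tone curve
    ratio = (rgb / lum[..., None]) ** saturation                # saturation control
    return np.clip(ratio * lum_t[..., None], 0, 1)

# example tone curve: log-luminance in [-2, 2] mapped to the display range [0, 1]
cx = np.linspace(-2, 2, 5)
cy = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
rng = np.random.default_rng(4)
hdr = rng.gamma(2.0, 1.0, (32, 32, 3))
print(generic_tmo(hdr, cx, cy).shape)
```

Fitting cx/cy (and, for local operators, a modulation transfer function) to pairs of HDR inputs and tone-mapped outputs is what turns this template into the black-box analysis tool the abstract describes.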

12.
Many works focus on multi-spectral capture and analysis, but multi-spectral display still remains a challenge. Most prior works on multi-primary displays use ad-hoc narrow band primaries that assure a larger color gamut, but cannot assure a good spectral reproduction. Content-dependent spectral analysis is the only way to produce good spectral reproduction, but cannot be applied to general data sets. Wide primaries are better suited for assuring good spectral reproduction due to greater coverage of the spectral range, but have not been explored much. In this paper we explore the use of wide band primaries for accurate spectral reproduction for the first time and present the first content-independent multi-spectral display, achieved using superimposed projections with modified wide band primaries. We present a content-independent primary selection method that selects a small set of n primaries from a large set of m candidate primaries, where m > n. Our primary selection method chooses primaries with complete coverage of the visible wavelength range (for good spectral reproduction accuracy), low interdependency (to limit the primaries to a small number) and high light throughput (for higher light efficiency). Once the primaries are selected, the input values of the different primary channels needed to generate a desired spectrum are computed using an optimization method that minimizes spectral mismatch while maximizing visual quality. We implement a real prototype multi-spectral display consisting of 9 primaries, using three modified conventional 3-primary projectors, and compare it with a conventional display to demonstrate its superior performance. Experiments show our display is capable of providing a large gamut and good visual appearance while displaying multi-spectral images with high spectral accuracy.
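Once the primaries are fixed, the per-pixel mixing problem reduces to finding non-negative channel drive values that best reproduce a target spectrum. The sketch below keeps only that spectral-mismatch term (the paper's optimization also weights visual quality); the Gaussian toy primaries are assumptions.

```python
import numpy as np
from scipy.optimize import nnls   # non-negative least squares

def channel_inputs(primaries, target):
    """Given measured primary spectra (wavelengths x n channels) and a target
    spectrum, solve for non-negative channel drive values that minimise the
    spectral mismatch; visual-quality weighting is omitted in this sketch."""
    x, residual = nnls(primaries, target)
    return np.clip(x, 0, 1), residual

# toy example: 3 broad-band primaries sampled at 31 wavelengths (400-700 nm)
wl = np.linspace(400, 700, 31)
primaries = np.stack([np.exp(-((wl - c) / 60.0) ** 2) for c in (450, 550, 630)], axis=1)
target = 0.4 * primaries[:, 0] + 0.7 * primaries[:, 2]
print(channel_inputs(primaries, target))       # recovers roughly (0.4, 0, 0.7)
```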

13.
Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real-world luminances. In this paper we focus on the latter and, rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel, effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.
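The scaling step itself is simple once the sky has been identified and an expected sky luminance has been derived from the metadata; the sketch below treats both of those as given inputs, which is where the paper's actual contribution lies.

```python
import numpy as np

def absolute_calibration(hdr_lum, sky_mask, expected_sky_cd_m2):
    """Core scaling step: relate linear (but relative) HDR luminance to an
    expected sky luminance derived from time, date and location metadata.
    Estimating expected_sky_cd_m2 from a sky model is outside this sketch."""
    scale = expected_sky_cd_m2 / np.median(hdr_lum[sky_mask])
    return hdr_lum * scale                      # now in cd/m^2

rng = np.random.default_rng(5)
relative = rng.uniform(0.1, 10.0, (240, 320))   # linear, relative HDR luminance
sky = np.zeros_like(relative, dtype=bool)
sky[:80, :] = True                              # assume the top of the frame is sky
print(absolute_calibration(relative, sky, expected_sky_cd_m2=8000.0).max())
```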

14.
Display devices, more than ever, are finding their way into electronic consumer goods as a result of recent trends towards more functionality and user interaction. Combined with new developments in display technology towards a higher reproducible luminance range, the mobility and variation in capability of display devices are constantly increasing. Consequently, in real-life usage it is now very likely that the display emission will be distorted by spatially and temporally varying reflections, and that the observer's visual system will not be adapted to the particular display being viewed at that moment. The actual perception of the display content cannot be fully understood by considering only steady-state illumination and adaptation conditions. We propose an objective method for display visibility analysis, formulating the problem as a full-reference image quality assessment problem where the display emission under "ideal" conditions is used as the reference for real-life conditions. Our work includes a human visual system model that accounts for maladaptation and temporal recovery of sensitivity. As an example application we integrate our method into a global illumination simulator and analyze the visibility of a car interior display under realistic lighting conditions.

15.
This paper presents methods for photo-realistic rendering using strongly spatially variant illumination captured from real scenes. The illumination is captured along arbitrary paths in space using a high dynamic range (HDR) video camera system with position tracking. Light samples are rearranged into 4D incident light fields (ILF) suitable for direct use as illumination in renderings. Analysis of the captured data allows for estimation of the shape, position and spatial and angular properties of light sources in the scene. The estimated light sources can be extracted from the large 4D data set and handled separately to render scenes more efficiently and with higher quality. The ILF lighting can also be edited for detailed artistic control.

16.
Despite their high popularity, common high dynamic range (HDR) methods are still limited in their practical applicability: they assume that the input images are perfectly aligned, which is often violated in practice. Our paper does not only free the user from this unrealistic limitation, but even turns the missing alignment into an advantage: by exploiting the multiple exposures, we can create a super-resolution image. The alignment step is performed by a modern energy-based optic flow approach that takes into account the varying exposure conditions. Moreover, it produces dense displacement fields with subpixel precision. As a consequence, our approach can handle arbitrarily complex motion patterns, caused by severe camera shake and moving objects. Additionally, it benefits from several advantages over existing strategies: (i) It is robust under outliers (noise, occlusions, saturation problems) and allows for sharp discontinuities in the displacement field. (ii) The alignment step neither requires camera calibration nor knowledge of the exposure times. (iii) It can be efficiently implemented on CPU and GPU architectures. After the alignment is performed, we use the obtained subpixel-accurate displacement fields as input for an energy-based, joint super-resolution and HDR (SR-HDR) approach. It introduces robust data terms and anisotropic smoothness terms to the SR-HDR literature. Our experiments with challenging real world data demonstrate that these novelties are pivotal for the favourable performance of our approach.
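After alignment, the exposures can be fused with a robust weighted merge; the sketch below uses a Gaussian hat weight that suppresses clipped pixels as a simple stand-in for the paper's joint SR-HDR energy (and it performs no super-resolution).

```python
import numpy as np

def merge_hdr(aligned, exposure_times, sigma=0.2):
    """Weighted HDR merge of exposures that have already been aligned, e.g. by an
    optic-flow step. The Gaussian hat weight down-weights clipped pixels and acts
    as a simple robust data term."""
    aligned = np.asarray(aligned, dtype=float)
    w = np.exp(-((aligned - 0.5) ** 2) / (2 * sigma ** 2))
    w[(aligned <= 0.005) | (aligned >= 0.995)] = 1e-4      # clipped: near-zero weight
    radiance = aligned / np.asarray(exposure_times)[:, None, None]
    return (w * radiance).sum(axis=0) / w.sum(axis=0)

rng = np.random.default_rng(6)
scene = rng.uniform(0, 50, (120, 160))
times = [1/15, 1/60, 1/250]
stack = [np.clip(scene * t, 0, 1) for t in times]          # perfectly aligned toy stack
print(merge_hdr(stack, times).shape)
```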

17.
We present a novel stereo-to-multiview video conversion method for glasses-free multiview displays. Different from previous stereo-to-multiview approaches, our mapping algorithm utilizes the limited depth range of autostereoscopic displays optimally and strives to preserve the scene's artistic composition and perceived depth even under strong depth compression. We first present an investigation of how perceived image quality relates to spatial frequency and disparity. The outcome of this study is utilized in a two-step mapping algorithm, where we (i) compress the scene depth using a non-linear global function to the depth range of an autostereoscopic display and (ii) enhance the depth gradients of salient objects to restore the perceived depth and salient scene structure. Finally, an adapted image domain warping algorithm is proposed to generate the multiview output, which enables overall disparity range extension.
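The two-step mapping can be illustrated with a toy disparity remapper: a log-like global curve compresses the range, and an unsharp-mask style boost stands in for the salient-gradient enhancement. The curve shape, filter size and boost factor are assumptions, and the saliency weighting of the paper is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def remap_disparity(disp, display_range=8.0, boost=1.5):
    """Toy two-step mapping: (i) compress scene disparity into a display's
    comfortable range with a non-linear global curve, (ii) amplify local
    disparity detail to restore perceived depth."""
    d = (disp - disp.min()) / (np.ptp(disp) + 1e-8)          # normalise to [0, 1]
    compressed = np.log1p(8.0 * d) / np.log1p(8.0)           # non-linear compression
    local_mean = uniform_filter(compressed, size=9)
    enhanced = np.clip(compressed + (boost - 1.0) * (compressed - local_mean), 0, 1)
    return enhanced * display_range - display_range / 2      # display disparity (px)

rng = np.random.default_rng(7)
disparity = rng.uniform(-40.0, 60.0, (90, 160))
out = remap_disparity(disparity)
print(out.min(), out.max())
```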

18.
Decomposing an input image into its intrinsic shading and reflectance components is a long-standing ill-posed problem. We present a novel algorithm that requires no user strokes and works on a single image. Based on simple assumptions about its reflectance and luminance, we first find clusters of similar reflectance in the image, and build a linear system describing the connections and relations between them. Our assumptions are less restrictive than widely-adopted Retinex-based approaches, and can be further relaxed in conflicting situations. The resulting system is robust even in the presence of areas where our assumptions do not hold. We show a wide variety of results, including natural images, objects from the MIT dataset and texture images, along with several applications, proving the versatility of our method.

19.
We present Video Brush, a novel interface for interactive video cutout. Inspired by the progressive selection scheme in images, our interface is designed to select video objects by painting on successive frames as the video plays. The video objects are progressively selected by solving the graph-cut based local optimization according to the strokes drawn by the brush on each painted frame. In order to provide users interactive feedback, we accelerate 3D graph-cut by efficient graph building and multi-level banded graph-cut. Experimental results show that our novel interface is both intuitive and efficient for video cutout.

20.
Environment-mapped rendering of Lambertian isotropic surfaces is common, and a popular technique is to use a quadratic spherical harmonic expansion. This compact irradiance map representation is widely adopted in interactive applications like video games. However, many materials are anisotropic, and shading is determined by the local tangent direction, rather than the surface normal. Even for visualization and illustration, it is increasingly common to define a tangent vector field, and use anisotropic shading. In this paper, we extend spherical harmonic irradiance maps to anisotropic surfaces, replacing Lambertian reflectance with the diffuse term of the popular Kajiya-Kay model. We show that there is a direct analogy, with the surface normal replaced by the tangent. Our main contribution is an analytic formula for the diffuse Kajiya-Kay BRDF in terms of spherical harmonics; this derivation is more complicated than for the standard diffuse lobe. We show that the terms decay even more rapidly than for Lambertian reflectance, going as l^-3, where l is the spherical harmonic order, and with only 6 terms (l = 0 and l = 2) capturing 99.8% of the energy. Existing code for irradiance environment maps can be trivially adapted for real-time rendering with tangent irradiance maps. We also demonstrate an application to offline rendering of the diffuse component of fibers, using our formula as a control variate for Monte Carlo sampling.
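The abstract's central claim can be written compactly in standard irradiance-map notation (the notation is an assumption, not a quotation from the paper): with $L_{lm}$ the lighting coefficients, $Y_{lm}$ the spherical harmonics evaluated at the tangent $\mathbf{t}$, and $\hat{A}_l$ the Kajiya-Kay analogue of the Lambertian cosine-lobe coefficients,

$$E(\mathbf{t}) \;\approx\; \sum_{l \in \{0,2\}} \sum_{m=-l}^{l} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{t}), \qquad \hat{A}_l = O\!\left(l^{-3}\right),$$

so the six terms with $l = 0$ and $l = 2$ already capture about 99.8% of the energy, and existing irradiance-map shaders need only swap the normal for the tangent and substitute the coefficients $\hat{A}_l$.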
