Similar Articles
20 similar articles found (search time: 31 ms)
1.
Abstract— The retinal adaptation process helps the human visual system to see high-dynamic-range (HDR) scenes in the real world. A simple static local-adaptation method for HDR image compression based on a retinal model is presented. The proposed model aims to recreate, in the range-compressed image on the display device, the same sensations the human visual system experiences when viewing the real-world scene, once the visual system has reached its steady local-adaptation state in each case. In computing scene local adaptation, the use of a non-linear edge-preserving bilateral filter not only yields better tonal rendition in compressing local contrast and preserving details but also avoids banding artifacts across high-gradient edges. The new model relates display adaptation to scene adaptation through the retinal model. To verify its effectiveness, a subjective evaluation comparing the real scene and the displayed image was conducted using the paired-comparison technique.
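The base/detail decomposition this abstract describes (compress a bilateral-filtered base layer in log space, keep the detail layer intact) can be sketched in a few lines of numpy. This is a minimal illustration in the Durand-Dorsey style, not the paper's retinal model; all function names and parameter values are assumptions:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.4, radius=4):
    """Brute-force bilateral filter on a 2D log-luminance array."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

def tone_map(luminance, compression=0.5):
    """Compress the base layer in log space; detail survives unchanged."""
    log_l = np.log10(np.maximum(luminance, 1e-6))
    base = bilateral_filter(log_l)        # large-scale luminance variations
    detail = log_l - base                 # fine texture, preserved as-is
    compressed = base * compression + detail
    return 10.0 ** (compressed - compressed.max())  # map peak to 1.0
```

Because the range kernel of the bilateral filter refuses to average across strong edges, the base layer stays sharp there, which is what avoids the banding/halo artifacts the abstract mentions.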

2.
A Gradient Compression Algorithm for High-Dynamic-Range Images   Cited: 1 (self: 0, others: 1)
LIU Dongmei, ZHAO Yuming. 《计算机工程》 (Computer Engineering), 2009, 35(20): 210-211
A high-dynamic-range (HDR) image is an image type that can represent the large luminance variations of a real scene: pixel values are proportional to the actual luminance of the corresponding scene points, so the optical characteristics of both bright and dark regions are represented faithfully. To display HDR images on conventional display hardware, a gradient compression algorithm is adopted that attenuates large gradients in the gradient field of the luminance image, compressing the dynamic range of the image luminance. Experimental results show that the algorithm displays HDR images with high visual quality.

3.
A Visualization Algorithm for High-Dynamic-Range Images   Cited: 4
In the proposed adaptive HDR image visualization algorithm, the input image is decomposed into a base layer and a detail layer. Rendering the overall light-dark balance is treated as a global problem: the base layer, which carries the luminance, is processed with a histogram-adjustment algorithm driven by global statistics. Preserving visible detail is treated as a local problem: the detail layer is processed with an adaptive detail-enhancement algorithm. A mapping image is defined to perform the final mapping of the detail-enhanced image, combining the two aspects into the final result. Experimental results show that the algorithm displays HDR images with high visual quality.
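A crude version of the pipeline this abstract outlines (histogram adjustment on the base layer, a gain on the detail layer) can be sketched as follows. This is a simplified stand-in, assuming a box-filter blur for the base layer and plain histogram equalization instead of the paper's adaptive methods:

```python
import numpy as np

def visualize_hdr(lum, detail_gain=1.5, bins=256):
    """Base layer: global histogram equalization; detail layer: boosted."""
    log_l = np.log10(np.maximum(lum, 1e-6))
    # Crude base layer: repeated 5-point box filtering (wraps at edges).
    base = log_l.copy()
    for _ in range(3):
        base = (np.roll(base, 1, 0) + np.roll(base, -1, 0) +
                np.roll(base, 1, 1) + np.roll(base, -1, 1) + base) / 5.0
    detail = log_l - base
    # Histogram-equalize the base layer into [0, 1].
    hist, edges = np.histogram(base, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    base_eq = np.interp(base.ravel(), edges[:-1], cdf).reshape(base.shape)
    return np.clip(base_eq + detail_gain * detail, 0.0, 1.0)
```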

4.
With the emergence of high-dynamic-range (HDR) images and video, displaying HDR content on standard-dynamic-range displays with a natural visual effect, clear local detail, and a high contrast ratio has become an important issue. To achieve HDR, the local-dimming technique is commonly used. In this paper, we investigate local dimming for LED-backlit LC displays to provide algorithmic support for developing HDR displays. A novel local-dimming algorithm is proposed to improve the contrast ratio, enhance visual quality, and reduce the power consumption of LCDs. The algorithm consists of two main parts. The first is a backlight-luminance extraction method based on a dynamic threshold and the maximum grayscale of an image block, which improves the contrast ratio and reduces power consumption. The second is a pixel-compensation method based on a logarithmic function, which improves visual quality and contrast ratio. In addition, to better smooth backlight diffusion at the edges of the backlight-luminance signal and thereby improve the accuracy of pixel compensation, we draw on the idea of BMA and improve it to establish a backlight-diffusion model with different low-pass-filter templates for different block types. Simulation and measurement results show that the proposed algorithm outperforms competing ones in contrast ratio, visual quality, and power-saving ratio.
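The interplay of backlight dimming and logarithmic pixel compensation can be illustrated with a toy example. This is not the paper's algorithm: the dimming factor is fixed rather than extracted per block, and the log curve is a generic one chosen for illustration; all parameters are assumptions:

```python
import numpy as np

def dim_and_compensate(block, dim=0.6, k=9.0):
    """Dim an 8-bit block's backlight to `dim` of full brightness, then
    compensate pixels: first a naive linear boost (clipped at 255), then a
    logarithmic curve that lifts midtones and compresses highlights."""
    linear = np.clip(block / dim, 0, 255)           # linear compensation
    log_comp = 255 * np.log1p(k * linear / 255) / np.log1p(k)
    return np.minimum(log_comp, 255)
```

Pixels that the linear boost would push past 255 are exactly where the log curve helps: it trades some highlight accuracy for brighter, less-clipped midtones, which is the visual-quality gain the abstract claims for log-based compensation.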

5.
Mobile phones and tablets are rapidly gaining significance as omnipresent image and video capture devices. In this context we present an algorithm that allows such devices to capture high dynamic range (HDR) video. The design of the algorithm was informed by a perceptual study that assesses the relative importance of motion and dynamic range. We found that ghosting artefacts are more visually disturbing than a reduction in dynamic range, even if a comparable number of pixels is affected by each. We incorporated these findings into a real‐time, adaptive metering algorithm that seamlessly adjusts its settings to take exposures that will lead to minimal visual artefacts after recombination into an HDR sequence. It is uniquely suitable for real‐time selection of exposure settings. Finally, we present an off‐line HDR reconstruction algorithm that is matched to the adaptive nature of our real‐time metering approach.

6.
Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as normal vector and color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per‐sample color response to relight virtual objects in visual effects (VFX) look‐dev or augmented reality (AR) scenarios. Typically, a standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real‐time Point‐Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that can cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings on the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings or with low‐quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G‐buffers, to achieve real‐time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically‐based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step from the perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.

7.
Realism is often a primary goal in computer graphics imagery, and we strive to create images that are perceptually indistinguishable from an actual scene. Rendering systems can now closely approximate the physical distribution of light in an environment. However, physical accuracy does not guarantee that the displayed images will have authentic visual appearance. In recent years the emphasis in realistic image synthesis has begun to shift from the simulation of light in an environment to images that look as real as the physical environment they portray. In other words the computer image should be not only physically correct but also perceptually equivalent to the scene it represents. This implies aspects of the Human Visual System (HVS) must be considered if realism is required. Visual perception is employed in many different guises in graphics to achieve authenticity. Certain aspects of the visual system must be considered to identify the perceptual effects that a realistic rendering system must achieve in order to reproduce effectively a similar visual response to a real scene. This paper outlines the manner in which knowledge about visual perception is increasingly appearing in state‐of‐the‐art realistic image synthesis. After a brief overview of the HVS, this paper is organized into four sections, each exploring the use of perception in realistic image synthesis, each with slightly different emphasis and application. First, Tone Mapping Operators, which attempt to map the vast range of computed radiance values to the limited range of display values, are discussed. Then perception based image quality metrics, which aim to compare images on a perceptual rather than physical basis, are presented. These metrics can be used to evaluate, validate and compare imagery. Thirdly, perception driven rendering algorithms are described. These algorithms focus on embedding models of the HVS directly into global illumination computations in order to improve their efficiency. 
Finally, techniques for comparing computer graphics imagery against the real world scenes they represent are discussed.

8.
Abstract— High‐dynamic‐range (HDR) image capture and display has become an important engineering topic. The discipline of reproducing scenes with a high range of luminances has a five‐century history that includes painting, photography, electronic imaging, and image processing. HDR images are superior to conventional images. There are two fundamental scientific issues that control HDR image capture and reproduction. The first is the range of information that can be measured using different techniques. The second is the range of image information that can be utilized by humans. Optical veiling glare severely limits the range of luminance that can be captured and seen. It is the improved quantization of digital data and the preservation of the scene's spatial information that causes the improvement in quality in HDR reproductions.

9.
10.
Typical high dynamic range (HDR) imaging approaches based on multiple images have difficulties in handling moving objects and camera shakes, suffering from the ghosting effect and the loss of sharpness in the output HDR image. While there exist a variety of solutions for resolving such limitations, most of the existing algorithms are susceptible to complex motions, saturation, and occlusions. In this paper, we propose an HDR imaging approach using the coded electronic shutter, which can capture a scene with row‐wise varying exposures in a single image. Our approach enables a direct extension of the dynamic range of the captured image without using multiple images, by photometrically calibrating rows with different exposures. Because the multiple exposures are captured concurrently, misalignments of moving objects are naturally avoided, significantly reducing the ghosting effect. To handle the issues of under‐/over‐exposure, noise, and blur, we present a coherent HDR imaging process in which these problems are resolved one by one at each step. Experimental results with real photographs, captured using a coded electronic shutter, demonstrate that our method produces high‐quality HDR images without ghosting and blur artifacts.
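The photometric calibration of row-wise varying exposures reduces, in the simplest radiometrically linear case, to dividing each row by its exposure time and masking unreliable saturated pixels. A minimal sketch under that assumption (the paper's actual calibration also handles noise and blur, which this omits):

```python
import numpy as np

def equalize_rows(img, row_exposures, sat_level=250.0):
    """Bring rows captured with different shutter times onto a common
    radiance scale; near-saturated pixels are marked unreliable (NaN)."""
    exp = np.asarray(row_exposures, dtype=float)[:, None]
    radiance = img / exp                     # exposure-normalize each row
    valid = img < sat_level                  # saturated values carry no info
    return np.where(valid, radiance, np.nan)
```

After this step, neighboring rows exposed differently describe the same radiance scale, so long-exposure rows fill in shadows while short-exposure rows recover highlights, all from a single shot.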

11.
Abstract— The ideal frame rate for the highest motion‐image quality with respect to blur and jerkiness is presented. In order to determine the requirements for avoiding these impairments, motion images from a high‐speed camera and computer graphics were combined with a high‐speed display to perform a psychophysical evaluation. The camera, operating at 1000 fps, and image processing were used to simulate various frame rates and shutter speeds, and a 480‐Hz CRT display was used to present motion images simulating various frame rates and time characteristics of the display. Subjects were asked to evaluate the difference in quality between motion images at various frame rates. A frame rate of 480 fps was chosen to be an appropriate reference frame rate that, as a first estimation, enables coverage up to the human‐dynamic‐resolution (HDR) limit based on another experiment using real moving charts. The results show that a frame rate of 120 fps provides good improvement compared to that of 60 fps, and that the maximum improvement beyond which evaluation is saturated is found at about 240 fps for representative standard‐resolution natural images.

12.
High-Dynamic-Range Images and Tone Mapping Operators   Cited: 2
The limited dynamic response range of image sensors makes them inadequate for capturing high-dynamic-range scenes. To capture high-dynamic-range images (HDRI), many new sensors and methods have emerged in recent years, which this paper briefly surveys. Likewise, because of their limited dynamic response range, display devices cannot render HDRI directly; a tone mapping operator (TMO) must be used to compress the image's dynamic range sensibly, and the TMO ultimately determines the displayed image quality. This paper classifies the many existing TMOs into global and local operators and discusses them in detail.
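The global-versus-local distinction the survey draws can be made concrete: a global operator applies one curve to every pixel, while a local operator lets a pixel's mapping depend on its neighborhood. Two minimal illustrative examples (a log curve in the style of Drago et al., and a neighborhood-mean scaling; parameter values are assumptions):

```python
import numpy as np

def global_log_tmo(lum):
    """Global operator: a single log curve shared by all pixels."""
    return np.log1p(lum) / np.log1p(lum.max())

def local_mean_tmo(lum, radius=1, key=0.5):
    """Local operator: each pixel is scaled against its neighborhood mean,
    so the same scene luminance can map to different display values."""
    pad = np.pad(lum, radius, mode='edge')
    h, w = lum.shape
    local = np.empty_like(lum)
    for i in range(h):
        for j in range(w):
            local[i, j] = pad[i:i + 2*radius + 1, j:j + 2*radius + 1].mean()
    return lum / (lum + key * local)
```

Global operators are cheap and halo-free but flatten local contrast; local operators preserve detail in both shadows and highlights at the cost of possible halo artifacts, which is the trade-off the survey's taxonomy is organized around.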

13.
To address the real-time performance of traditional multi-exposure image fusion and the removal of ghosting in dynamic scenes, a multi-exposure HDR reconstruction algorithm based on modeling the gray-level mapping function is proposed. For a low-dynamic-range (LDR) image sequence of arbitrary size, only as many visually adapted S-shaped curves need to be fitted as there are gray levels, rather than one per camera pixel; the images are then fused directly using a best-imaging-value criterion. This raises fusion efficiency enough to meet real-time requirements. For dynamic scenes, gray-level mapping relations are designed to recover idealized multi-exposure images, moving-object regions are detected by differencing and ghost removal is applied, yielding a fused high-dynamic-range image that reflects the true scene information and is free of ghosting.
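The role of an S-shaped well-exposedness curve in multi-exposure fusion can be sketched as follows. This is a per-pixel weighted average used as a stand-in for the paper's per-gray-level lookup (which fits one curve per gray level, not per pixel); the Gaussian weight and its width are assumptions:

```python
import numpy as np

def s_curve_weight(gray):
    """Well-exposedness weight: a bell curve peaking at mid-gray, so
    under- and over-exposed pixels contribute little."""
    return np.exp(-((gray / 255.0 - 0.5) ** 2) / (2 * 0.2 ** 2))

def fuse_exposures(stack, exposures):
    """Fuse an LDR exposure stack: weight each exposure by
    well-exposedness, average the exposure-normalized values."""
    stack = np.asarray(stack, dtype=float)
    w = s_curve_weight(stack) + 1e-8            # avoid divide-by-zero
    radiance = stack / np.asarray(exposures, dtype=float)[:, None, None]
    return (w * radiance).sum(axis=0) / w.sum(axis=0)
```

Fitting curves per gray level instead of per pixel is what decouples the fitting cost from camera resolution, which is the source of the real-time claim in the abstract.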

14.
Since high dynamic range (HDR) displays are not yet widely available, there is still a need to perform a dynamic range reduction of HDR content to reproduce it properly on standard dynamic range (SDR) displays. The most common techniques for performing this reduction are termed tone‐mapping operators (TMOs). Although mobile devices are becoming widespread, methods for displaying HDR content on these SDR screens are still very much in their infancy. While several studies have been conducted to evaluate TMOs, few have been done with the goal of testing small screen displays (SSDs), common on mobile devices. This paper presents an evaluation of six state‐of‐the‐art HDR video TMOs. The experiments considered three different levels of ambient luminance, under which 180 participants were asked to rank the TMOs for seven tone‐mapped HDR video sequences. A comparison was conducted between tone‐mapped HDR video footage shown on an SSD and on a large screen SDR display, using an HDR display as reference. The results show that there are differences between the performance of the TMOs under different ambient lighting levels, and that TMOs that perform well on traditional large screen displays also perform well on SSDs at the same luminance level.

15.
Abstract— Conjugate‐optical retroreflector (COR) display systems have the potential for providing inexpensive high‐resolution imagery in a head‐mounted display (HMD) configuration. There are several perceptual issues, however, that need to be addressed before a COR display system can be used effectively. One issue is the choice of projected‐image location relative to the retroreflective screen, which is determined by the convergence angle between the binocular channels of the COR display. Another issue involves visual half‐occlusions, which can occur when a portion of a stereoscopic image is visible to only one eye, as may occur in any HMD. If half‐occlusions are simulated in a COR display in a way that is inconsistent with natural viewing, undesirable perceptual effects may result. In the present paper, we first describe the optical principles that underlie the COR display system. We then discuss the importance of binocular convergence and describe a COR display configuration that eliminates inconsistencies in the depth cues provided by displayed surface properties and half‐occlusions.

16.
Multi‐planar plenoptic displays consist of multiple spatially varying light‐emitting and light‐modulating planes. In this work, we introduce a framework to display light field data on this new type of display device. First, we present a mathematical notation that describes each of the layers in terms of the corresponding light transport operators. Next, we explain an algorithm that renders a light field with depth into a given multi‐planar plenoptic display and analyze the approximation error. We show two different physical prototypes that we have designed and built: The first design uses a dynamic parallax barrier and a number of bi‐state (translucent/opaque) screens. The second design uses a beam splitter to co‐locate two pairs of parallax barriers and static image projection screens. We evaluate both designs on a number of different 3D scenes. Finally, we present simulated and real results for different display configurations.

17.
Abstract— Augmented reality (AR) is a technology in which computer‐generated virtual images are dynamically superimposed upon a real‐world scene to enhance a user's perceptions of the physical environment. A successful AR system requires that the overlaid digital information be aligned with the user's real‐world senses — a process known as registration. An accurate registration process requires knowledge of both the intrinsic and extrinsic parameters of the viewing device; these parameters form the viewing and projection transformations for creating the simulations of virtual images. In our previous work, an easy off‐line calibration method was presented in which an image‐based automatic matching method was used to establish the world‐to‐image correspondences, achieving subpixel accuracy. However, this off‐line method yields accurate registration only when a user's eye placements relative to the display device coincide with the locations established during the off‐line calibration process. A likely deviation of eye placements, for instance due to helmet slippage or user‐dependent factors such as interpupillary distance, will lead to misregistration. In this paper, a systematic on‐line calibration framework to refine the off‐line calibration results and to account for user‐dependent factors is presented. Specifically, based on an equivalent viewing projection model, a six‐parameter on‐line calibration method to refine the user‐dependent parameters in the viewing transformations is presented. Calibration procedures and results as well as evaluation experiments are described in detail. The evaluation experiments demonstrate the improvement of the registration accuracy.

18.
Abstract— With interest in high‐dynamic‐range imaging mounting, techniques for displaying such images on conventional display devices are gaining in importance. Conversely, high‐dynamic‐range display hardware is creating the need for display algorithms that prepare images for such displays. In this paper, the current state of the art in dynamic‐range reduction and expansion is reviewed, and in particular the theoretical and practical need to structure tone reproduction as a combination of a forward and a reverse pass is discussed.
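The forward/reverse-pass structure this survey argues for can be illustrated with the simplest invertible operator: a global log curve whose forward pass compresses scene luminance for an SDR display and whose reverse pass recovers scene-referred values, for example to re-target content for an HDR display. A minimal sketch (the curve choice is an illustrative assumption, not the survey's recommendation):

```python
import numpy as np

def forward_pass(lum):
    """World luminance -> display range [0, 1] via a global log curve.
    Returns the mapped image and the peak needed to invert the mapping."""
    lmax = lum.max()
    return np.log1p(lum) / np.log1p(lmax), lmax

def reverse_pass(display, lmax):
    """Invert the forward curve to recover scene-referred luminance."""
    return np.expm1(display * np.log1p(lmax))
```

The point of the pairing is that any parameters consumed by the forward pass (here just `lmax`) must be carried as metadata so the reverse pass remains well defined.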

19.
Augmented reality (AR) display technology greatly enhances users' perception of and interaction with the real world by superimposing a computer‐generated virtual scene on the real physical world. The main problem of state‐of‐the‐art 3D AR head‐mounted displays (HMDs) is the accommodation‐vergence conflict, because the 2D images displayed by flat panel devices are at a fixed distance from the eyes. In this paper, we present a design for an optical see‐through HMD utilizing multi‐plane display technology for AR applications. This approach provides correct depth information and solves the accommodation‐vergence conflict problem. In our system, a projector projects slices of a 3D scene onto a stack of polymer‐stabilized liquid crystal scattering shutters in time sequence to reconstruct the 3D scene. The polymer‐stabilized liquid crystal shutters have sub‐millisecond switching times, which enables a sufficient number of shutters to achieve high depth resolution. A proof‐of‐concept two‐plane optical see‐through HMD prototype is demonstrated. Our design can be made lightweight and compact, with high resolution and a large depth range from near the eye to infinity, and thus holds great potential for fatigue‐free AR HMDs.

20.
We present an image‐based rendering system to viewpoint‐navigate through space and time of complex real‐world, dynamic scenes. Our approach accepts unsynchronized, uncalibrated multivideo footage as input. Inexpensive, consumer‐grade camcorders suffice to acquire arbitrary scenes, for example in the outdoors, without elaborate recording setup procedures, allowing also for hand‐held recordings. Instead of scene depth estimation, layer segmentation or 3D reconstruction, our approach is based on dense image correspondences, treating view interpolation uniformly in space and time: spatial viewpoint navigation, slow motion or freeze‐and‐rotate effects can all be created in the same way. Acquisition simplification, integration of moving cameras, generalization to difficult scenes and space–time symmetric interpolation amount to a widely applicable virtual video camera system.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号