Similar Documents
20 similar documents found (search time: 46 ms)
1.
Based on the fact that the hue information in a color image is largely immune to specular reflection, this paper proposes a hue-constrained specular reflection separation algorithm. First, the image is clustered using its hue information. The distance between each pixel's chromaticity and the illumination chromaticity is then computed to obtain the blending coefficients of the diffuse and specular reflections. To keep the pixel clustering robust to noise, a bilateral filter is applied to the blending coefficients. Finally, the diffuse image with the specular reflection removed is recovered from the estimated blending coefficients. Experiments show that the algorithm removes specular reflection effectively while preserving image details and edges, and also achieves good visual results on natural highlight images.
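The chromaticity-distance step described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the function name, the normalization, and the exact mapping from distance to blending weight are my assumptions.

```python
import numpy as np

def blend_coefficient(rgb, illum_chroma):
    """Per-pixel specular/diffuse blending weight, sketched from the abstract:
    pixels whose normalized chromaticity lies close to the illumination
    chromaticity are treated as more specular. The paper's actual weighting
    may differ; this only illustrates the distance computation."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    chroma = rgb / np.maximum(total, 1e-8)           # normalized chromaticity
    d = np.linalg.norm(chroma - np.asarray(illum_chroma, float), axis=-1)
    dmax = d.max() if d.max() > 0 else 1.0
    return 1.0 - d / dmax                            # 1 = chromaticity matches illuminant
```

For a white illuminant (chromaticity [1/3, 1/3, 1/3]) a gray pixel gets weight 1 and a saturated diffuse pixel gets a low weight; the abstract then smooths these weights with a bilateral filter before the final separation.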

2.
Separating reflection components based on chromaticity and noise analysis   (total citations: 2; self-citations: 0, by others: 2)
Many algorithms in computer vision assume diffuse-only reflections and deem specular reflections to be outliers. However, in the real world, the presence of specular reflections is inevitable, since there are many inhomogeneous dielectric objects which exhibit both diffuse and specular reflections. To resolve this problem, we present a method to separate the two reflection components. The method is principally based on the distribution of specular and diffuse points in a two-dimensional maximum chromaticity-intensity space. We found that, by utilizing this space and the known illumination color, the problem of reflection component separation can be simplified into the problem of identifying the diffuse maximum chromaticity. To be able to identify the diffuse maximum chromaticity correctly, an analysis of the noise is required, since most real images suffer from it. Unlike existing methods, the proposed method can separate the reflection components robustly for any kind of surface roughness and light direction.
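The maximum chromaticity-intensity space mentioned above is cheap to compute. A minimal sketch under a white-illumination assumption (function name is mine):

```python
import numpy as np

def max_chromaticity_intensity(rgb):
    """Map a pixel (or image) into maximum chromaticity-intensity space:
    sigma = max(R,G,B) / (R+G+B), which lies in [1/3, 1]. For a uniformly
    colored diffuse surface sigma is constant, while an additive white
    specular term pulls sigma toward 1/3 - the shift the paper exploits."""
    rgb = np.asarray(rgb, dtype=float)
    intensity = rgb.max(axis=-1)                       # maximum intensity
    sigma = intensity / np.maximum(rgb.sum(axis=-1), 1e-8)
    return sigma, intensity
```

Adding a white specular offset to a diffuse pixel lowers its maximum chromaticity, which is why highlight pixels form a curved cluster below the diffuse line in this space.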

3.
Objective: Under non-uniform illumination, object surfaces often exhibit patchy regions of strong specular reflection, and traditional highlight-removal methods tend to cause color distortion or edge loss when restoring such images. To address these shortcomings, an improved highlight-removal method based on bilateral filtering is proposed. Method: First, the relationship between the specular component and the maximum diffuse chromaticity is derived from the dichromatic reflection model. A threshold then divides the image pixels into two classes, separating pixels containing only a diffuse component from those also containing a specular component, and the maximum diffuse chromaticity is estimated separately for each class. The similarity of the estimated maximum diffuse chromaticity serves as the range term of a bilateral filter, with the image's maximum chromaticity map as the guidance image, so that edges are preserved while noise is suppressed and the specular component is removed. Results: On classic highlight images, estimating the maximum diffuse chromaticity separately for specular and purely diffuse pixels and using that estimate as the bilateral filter's guidance image not only removes the specular component but also preserves edge information, restores detail colors, and resolves the color-degradation problem that the original algorithm exhibits for pixels with similar R, G, and B channels. Applied to 50 highlight images and compared with the methods of Yang and Shen, the improved algorithm raises the average peak signal-to-noise ratio (PSNR) by 4.17% and 8.40%, respectively; its results better match human visual perception and yield higher image quality. Conclusion: Experiments show that, for images containing specular reflection, the proposed method removes multiple local highlight regions more effectively and restores the image, providing a theoretical basis for restoring images captured under uneven indoor and outdoor illumination.
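The guided (joint/cross) bilateral filtering step described above uses one signal for the range weights and filters another. A 1-D toy version, with parameter names and values chosen by me for illustration:

```python
import numpy as np

def joint_bilateral_1d(signal, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """1-D joint bilateral filter: spatial weights come from pixel distance,
    range weights from the *guide* (in the paper: the estimated maximum
    diffuse chromaticity map), so edges present in the guide are preserved
    while noise in `signal` is smoothed away."""
    signal = np.asarray(signal, float)
    guide = np.asarray(guide, float)
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2.0 * sigma_s ** 2)) *        # spatial
             np.exp(-((guide[idx] - guide[i]) ** 2) / (2.0 * sigma_r ** 2)))  # range
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

With a small `sigma_r`, pixels across a step in the guide receive near-zero weight, so the edge survives filtering, which is the behavior the abstract relies on to keep object boundaries intact.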

4.
On inhomogeneous objects, a highlight is regarded as a linear combination of a diffuse component and a specular component. Highlight removal from a single image is a very challenging problem in computer vision. Many methods attempt to separate the diffuse and specular components, but they typically require preprocessing such as image segmentation, lack robustness, and are time-consuming. An efficient highlight-removal method based on the bilateral filter is designed here: exploiting the property that the maximum diffuse chromaticity is locally smooth, a bilateral filter propagates and diffuses the maximum chromaticity values, removing the highlights of the whole image. An acceleration strategy is used to speed up the bilateral filter, which markedly improves runtime compared with currently popular methods. Compared with traditional methods, this method removes highlights better and runs faster, making it well suited to real-time applications.
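Once a per-pixel maximum diffuse chromaticity is in hand, the separation itself follows directly from the dichromatic model under white illumination. A sketch of that closing step (the function name is mine; the algebra follows from I = I_d + s with the specular term s equal across channels):

```python
import numpy as np

def remove_specular(rgb, lam):
    """Given the estimated maximum diffuse chromaticity `lam` and white
    illumination, the specular term is s = (I_max - lam * I_sum) / (1 - 3*lam),
    since I_max = Id_max + s and I_sum = Id_sum + 3s. Subtracting s from each
    channel yields the diffuse image."""
    rgb = np.asarray(rgb, float)
    imax = rgb.max(axis=-1)
    isum = rgb.sum(axis=-1)
    s = np.clip((imax - lam * isum) / (1.0 - 3.0 * lam), 0.0, None)
    return rgb - s[..., None]
```

Plugging a pixel with known diffuse color and an added white specular offset back through this formula recovers the diffuse color exactly, which is the consistency check behind the propagation scheme.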

5.
Illumination-constrained inpainting for single-image highlight removal   (total citations: 5; self-citations: 0, by others: 5)
Detecting and removing highlights (specular reflections) in images has long been an active topic in computer vision, and the results have an important bearing on the performance of vision algorithms. This paper proposes a method for detecting and removing highlights. First, by comparing the chromatic properties of specular and diffuse reflection, an interactive method is given for detecting highlight regions on monochromatic surfaces. Then, an inpainting approach combined with illumination constraints is used to design a new inpainting algorithm that removes highlights from a single image and restores the diffuse component. Unlike general inpainting, the algorithm fully exploits the information contained in the highlight region itself to guide the inpainting process. By jointly using the observed pixel values, illumination chromaticity analysis, and the smoothness of the illuminant color to constrain inpainting, the algorithm overcomes the inability of general inpainting methods to preserve subtle shading variations on the object surface. Experimental results show that, compared with previous single-image highlight-removal methods, the algorithm provides better illumination chromaticity estimates and therefore more accurate results.

6.
Highlight removal is an active research problem in computer vision. Existing methods that remove highlights from a single image by separating the diffuse and specular components with the dichromatic reflection model tend to cause color distortion and texture loss. To address this, the pixel-clustering step of intensity-ratio-based highlight removal is improved here so that pixels are classified more accurately, reducing color distortion. First, a highlight-free image is obtained as the difference between the original image and the single-channel image of per-pixel minimum intensities. From this highlight-free image, the maximum and minimum diffuse chromaticity of each pixel related to the highlight region are computed. Finally, the pixels in the highlight region are mapped into the minimum-maximum chromaticity space and clustered with x-means; the intensity ratio estimated from the classified diffuse pixels then makes it straightforward to separate the specular component of the highlight pixels, yielding the highlight-free result. Experiments show that, compared with existing methods, the peak signal-to-noise ratio improves by 2% to 4% on average, color distortion and texture loss are reduced, and the visual quality is better.
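The first step above, subtracting the minimum-channel image, is a standard trick worth making concrete. A minimal sketch (function name is mine), assuming white illumination so the specular term is equal in all three channels:

```python
import numpy as np

def min_subtracted_image(rgb):
    """The 'highlight-free' image from the abstract: subtract each pixel's
    minimum channel from all three channels. Under white illumination an
    additive specular term contributes equally to R, G and B, so it cancels
    in this difference, leaving a specular-invariant image."""
    rgb = np.asarray(rgb, float)
    return rgb - rgb.min(axis=-1, keepdims=True)
```

Because the specular offset cancels, a diffuse pixel and the same pixel with a highlight added map to the same value, which is what makes the subsequent chromaticity clustering reliable.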

7.
Separation of Reflection Components Using Color and Polarization   (total citations: 4; self-citations: 0, by others: 4)
Specular reflections and interreflections produce strong highlights in brightness images. These highlights can cause vision algorithms for segmentation, shape from shading, binocular stereo, and motion estimation to produce erroneous results. A technique is developed for separating the specular and diffuse components of reflection from images. The approach is to use color and polarization information, simultaneously, to obtain constraints on the reflection components at each image point. Polarization yields local and independent estimates of the color of specular reflection. The result is a linear subspace in color space in which the local diffuse component must lie. This subspace constraint is applied to neighboring image points to determine the diffuse component. In contrast to previous separation algorithms, the proposed method can handle highlights on surfaces with substantial texture, smoothly varying diffuse reflectance, and varying material properties. The separation algorithm is applied to several complex scenes with textured objects and strong interreflections. The separation results are then used to solve three problems pertinent to visual perception; determining illumination color, estimating illumination direction, and shape recovery.
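The polarization cue above rests on the fact that transmitted radiance through a rotating polarizer follows a sinusoid I(θ) = Ic + Iv·cos(2θ + φ), where the constant part is unpolarized (largely diffuse) and the amplitude is attributed mainly to specular reflection. A sketch of recovering those two parts from three polarizer angles (the three-angle sampling and function name are my choices, not necessarily the paper's procedure):

```python
import numpy as np

def polarization_minmax(i0, i45, i90):
    """Recover the constant term Ic and polarized amplitude Iv of
    I(theta) = Ic + Iv*cos(2*theta + phi) from samples at polarizer angles
    0, 45 and 90 degrees: I0 + I90 = 2*Ic, and (I0 - Ic, I45 - Ic) are
    (Iv*cos(phi), -Iv*sin(phi)), so their hypotenuse is Iv."""
    ic = (i0 + i90) / 2.0          # unpolarized (mostly diffuse) part
    iv = np.hypot(i0 - ic, i45 - ic)  # polarized (mostly specular) amplitude
    return ic, iv
```

Three angles suffice because the sinusoid has three unknowns (Ic, Iv, φ); more angles would simply allow a least-squares fit.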

8.
Highlight removal from a single image is a very challenging problem. Most previous methods require preprocessing such as image segmentation, or require interactive user input. Starting from the color statistics of highlight images, this work observes that the maximum diffuse chromaticity is locally smooth. The maximum diffuse chromaticity of specular pixels is then estimated by propagating and diffusing the maximum chromaticity values with a linear model, spreading them from the diffuse pixels of the image to the specular pixels; finally, the diffuse component of every pixel is recovered. Compared with traditional methods, this highlight-removal approach performs well, is very simple, parallelizes easily, and can meet the needs of real-time applications.
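The propagation idea above can be illustrated with a deliberately crude 1-D stand-in: since highlights only *lower* the maximum chromaticity, letting each pixel adopt the largest chromaticity in its neighborhood spreads diffuse values into specular pixels. This max-filter toy is my simplification, not the paper's linear model:

```python
import numpy as np

def propagate_max_chromaticity(sigma, iters=3):
    """Toy 1-D propagation: assuming the maximum diffuse chromaticity is
    locally smooth, each (possibly specular) pixel repeatedly adopts the
    largest chromaticity seen in its 3-pixel neighborhood, so values flow
    from diffuse pixels into adjacent specular ones."""
    sigma = np.asarray(sigma, float).copy()
    for _ in range(iters):
        padded = np.pad(sigma, 1, mode='edge')       # replicate borders
        sigma = np.maximum(np.maximum(padded[:-2], padded[1:-1]), padded[2:])
    return sigma
```

In a real image the neighborhood comparison must also respect chromaticity similarity (so values do not leak across surfaces of different color), which is where the bilateral-style weighting of the related methods comes in.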

9.
We present a computational model and algorithm for detecting diffuse and specular interface reflections and some inter-reflections. Our color reflection model is based on the dichromatic model for dielectric materials and on a color space, called S space, formed with three orthogonal basis functions. We transform color pixels measured in RGB into the S space and analyze color variations on objects in terms of brightness, hue and saturation, which are defined in the S space. When transforming the original RGB data into the S space, we discount the scene illumination color, which is estimated using a white reference plate as an active probe. As a result, the color image appears as if the scene illumination were white. Under the whitened illumination, the interface reflection clusters in the S space are all aligned with the brightness direction. The brightness, hue and saturation values exhibit a more direct correspondence to body colors and to diffuse and specular interface reflections, shading, shadows and inter-reflections than the RGB coordinates. We exploit these relationships to segment the color image, and to separate specular and diffuse interface reflections and some inter-reflections from body reflections. The proposed algorithm is effective for uniformly colored dielectric surfaces under singly colored scene illumination. Experimental results conform to our model and algorithm within the limitations discussed.
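The illumination-discounting step above is essentially a per-channel white balance, after which specular (interface) reflection aligns with the gray "brightness" axis. A minimal sketch (function name and the projection detail are mine):

```python
import numpy as np

def discount_illumination(rgb, illum_rgb):
    """Whitening step sketched from the abstract: divide each channel by the
    scene illumination color (measured from a white reference plate). After
    this transform, pure interface reflection is gray and aligns with the
    brightness axis [1,1,1]/sqrt(3); we also return the projection of each
    pixel onto that axis."""
    rgb = np.asarray(rgb, float)
    whitened = rgb / np.asarray(illum_rgb, float)
    brightness = whitened.sum(axis=-1) / np.sqrt(3.0)  # dot with [1,1,1]/sqrt(3)
    return whitened, brightness
```

A pixel that is a pure scaled copy of the illuminant color (i.e., pure specular reflection) becomes exactly gray after whitening, which is the alignment property the segmentation exploits.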

10.
Intrinsic images are a mid-level representation of an image that decomposes the image into reflectance and illumination layers. The reflectance layer captures the color/texture of surfaces in the scene, while the illumination layer captures shading effects caused by interactions between scene illumination and surface geometry. Intrinsic images have a long history in computer vision and more recently in computer graphics, and have been shown to be a useful representation for tasks ranging from scene understanding and reconstruction to image editing. In this report, we review and evaluate past work on this problem. Specifically, we discuss each work in terms of the priors it imposes on the intrinsic image problem. We introduce a new synthetic ground-truth dataset that we use to evaluate the validity of these priors and the performance of the methods. Finally, we evaluate the performance of the different methods in the context of image-editing applications.

11.
Automatic decomposition of intrinsic images, especially for complex real-world images, is a challenging under-constrained problem. Thus, we propose a new algorithm that generates and combines multi-scale properties of chromaticity differences and intensity contrast. The key observation is that the estimation of image reflectance, which is neither a pixel-based nor a region-based property, can be improved by using multi-scale measurements of image content. The new algorithm iteratively coarsens a graph reflecting the reflectance similarity between neighbouring pixels. Multi-scale reflectance properties are then aggregated so that the graph reflects the reflectance property at different scales. This is followed by an L0 sparse regularization on the whole reflectance image, which enforces the variation in reflectance images to be high-frequency and sparse. We formulate this problem through energy minimization, which can be solved efficiently within a few iterations. The effectiveness of the new algorithm is tested on the Massachusetts Institute of Technology (MIT) dataset, the Intrinsic Images in the Wild (IIW) dataset, and various natural images.

12.
Several techniques have been developed for recovering reflectance properties of real surfaces under unknown illumination. However, in most cases, those techniques assume that the light sources are located at infinity, which cannot be applied safely to, for example, reflectance modeling of indoor environments. In this paper, we propose two types of methods to estimate the surface reflectance property of an object, as well as the position of a light source from a single view without the distant illumination assumption, thus relaxing the conditions in the previous methods. Given a real image and a 3D geometric model of an object with specular reflection as inputs, the first method estimates the light source position by fitting to the Lambertian diffuse component, while separating the specular and diffuse components by using an iterative relaxation scheme. Our second method extends that first method by using as input a specular component image, which is acquired by analyzing multiple polarization images taken from a single view, thus removing its constraints on the diffuse reflectance property. This method simultaneously recovers the reflectance properties and the light source positions by optimizing the linearity of a log-transformed Torrance-Sparrow model. By estimating the object's reflectance property and the light source position, we can freely generate synthetic images of the target object under arbitrary lighting conditions with not only source direction modification but also source-surface distance modification. Experimental results show the accuracy of our estimation framework.
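The log-transformed Torrance-Sparrow linearity mentioned above is concrete enough to sketch: for a specular lobe Is = (k/cosθr)·exp(−α²/2σ²), the quantity log(Is·cosθr) is linear in α², so ordinary least squares recovers k and σ. This toy fit assumes noise-free samples and known angles, and the function name is mine:

```python
import numpy as np

def fit_log_torrance_sparrow(alpha, i_spec, cos_theta_r):
    """Fit k and sigma of Is = (k / cos(theta_r)) * exp(-alpha^2 / (2 sigma^2))
    via the linearity of log(Is * cos(theta_r)) = log(k) - alpha^2/(2 sigma^2)
    in alpha^2, using a least-squares line."""
    y = np.log(np.asarray(i_spec, float) * np.asarray(cos_theta_r, float))
    x = np.asarray(alpha, float) ** 2
    slope, intercept = np.polyfit(x, y, 1)    # y ~ slope * x + intercept
    sigma = np.sqrt(-1.0 / (2.0 * slope))     # slope = -1/(2 sigma^2)
    k = np.exp(intercept)
    return k, sigma
```

In the paper's setting the angles come from the 3D model and estimated light position, and "optimizing the linearity" means choosing the light position that makes this plot most linear.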

13.
Augmented reality aims to superimpose computer-generated virtual objects onto real scenes. Convincing virtual-real fusion requires estimating the scene illumination. For highlight scenes, the different reflected-light components in the scene are used for effective illumination estimation. First, a pixel-clustering-based image decomposition separates the reflected light, producing a diffuse map and a specular map; the diffuse map is then further decomposed into intrinsic images, yielding an albedo map and a shading map. Next, the decomposition results are combined with the scene depth to compute the illumination of the input image. Finally, the virtual objects are rendered with a global illumination model, producing lighting effects in which the virtual and real scenes are highly integrated.

14.
Iridescence is a natural phenomenon that is perceived as gradual color changes, depending on the view and illumination direction. Prominent examples are the colors seen in oil films and soap bubbles. Unfortunately, iridescent effects are particularly difficult to recreate in real-time computer graphics. We present a high-quality real-time method for rendering iridescent effects under image-based lighting. Previous methods model dielectric thin-films of varying thickness on top of an arbitrary micro-facet model with a conducting or dielectric base material, and evaluate the resulting reflectance term, responsible for the iridescent effects, only for a single direction when using real-time image-based lighting. This leads to bright halos at grazing angles and over-saturated colors on rough surfaces, which causes an unnatural appearance that is not observed in ground truth data. We address this problem by taking the distribution of light directions, given by the environment map and surface roughness, into account when evaluating the reflectance term. In particular, our approach prefilters the first and second moments of the light direction, which are used to evaluate a filtered version of the reflectance term. We show that the visual quality of our approach is superior to the ones previously achieved, while having only a small negative impact on performance.

15.
A new method is described to estimate diffuse and specular reflectance parameters using spectral images, which overcomes the dynamic range limitation of imaging devices. After eliminating the influences of illumination and camera on spectral images, reflection values are initially assumed to be diffuse-only reflection components, and subjected to the least squares method to estimate diffuse reflectance parameters at each wavelength on each single surface particle. Based on the dichromatic reflection model, specular reflection components are obtained, and then subjected to the least squares method to estimate specular reflectance parameters for gloss intensity and surface roughness. Experiments were carried out using both simulation data and measured spectral images. Our results demonstrate that this method is capable of estimating diffuse and specular reflectance parameters precisely for color and gloss reproduction, without requiring preprocesses such as image segmentation and synthesis of high dynamic range images.
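The per-wavelength least-squares step above has a one-line closed form under a Lambertian model. A sketch with assumed shapes and names (observations stacked in rows, wavelengths in columns):

```python
import numpy as np

def estimate_diffuse_reflectance(intensities, cos_terms):
    """Per-wavelength least squares sketched from the abstract: with a
    Lambertian model I(lambda) = r(lambda) * (n . l), several observations of
    the same surface particle give, for each wavelength column,
    r = sum(I * c) / sum(c^2) - the normal-equation solution."""
    I = np.asarray(intensities, float)   # shape (n_obs, n_wavelengths)
    c = np.asarray(cos_terms, float)     # shape (n_obs,) cosine factors
    return (c @ I) / (c @ c)
```

The same normal-equation pattern is then reused for the specular parameters after the dichromatic model has isolated the specular component.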

16.
Estimating the correspondence between images using optical flow is a key component of image fusion; however, computing optical flow between a pair of facial images including backgrounds is challenging due to large differences in illumination, texture, color and background. To improve optical flow results for image fusion, we propose a novel flow estimation method, wavelet flow, which can handle both the face and the background in the input images. The key idea is that, instead of computing flow directly between the input image pair, we estimate the image flow by incorporating multi-scale image transfer and optical-flow-guided wavelet fusion. Multi-scale image transfer helps to preserve the background and lighting detail of the input, while optical-flow-guided wavelet fusion produces a series of intermediate images for further optimization of fusion quality. Our approach can significantly improve the performance of the optical flow algorithm and provide more natural fusion results for both faces and backgrounds. We evaluate our method on a variety of datasets to demonstrate that it outperforms existing approaches.

17.
We present a practical and robust photorealistic rendering pipeline for augmented reality. We solve the real-world lighting conditions from observations of a diffuse sphere or a rotated marker. The solution method is based on l1-regularized least squares minimization, yielding a sparse set of light sources readily usable with most rendering methods. The framework also supports the use of more complex light source representations. Once the lighting conditions are solved, we render the image using modern real-time rendering methods such as shadow maps with variable softness, ambient occlusion, advanced BRDFs, and approximate reflections and refractions. Finally, we perform post-processing on the resulting images in order to match the various aberrations and defects typically found in the underlying real-world video.
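The l1-regularized least squares problem above, min ½‖Ax − b‖² + λ‖x‖₁, can be solved with a few lines of iterative soft-thresholding (ISTA). This is a generic textbook solver, not the paper's implementation; in their setting the columns of A would be renderings of the probe under individual basis lights, b the observed probe, and the sparse x the recovered light intensities:

```python
import numpy as np

def lasso_ista(A, b, lam=0.1, iters=500):
    """Minimal ISTA solver for min 0.5*||Ax - b||^2 + lam*||x||_1:
    gradient step on the quadratic term, then soft-thresholding, which
    drives most entries of x exactly to zero (a sparse set of lights)."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size from Lipschitz bound
    for _ in range(iters):
        g = A.T @ (A @ x - b)                    # gradient of 0.5*||Ax - b||^2
        z = x - t * g
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft threshold
    return x
```

The soft-threshold step is what produces exact zeros, which is why the recovered light set is sparse rather than merely small.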

18.
Variations in illumination degrade the performance of appearance-based face recognition. We present a novel algorithm for the normalization of color facial images using a single image and its co-registered 3D point cloud (3D image). The algorithm borrows the physically based Phong lighting model from computer graphics, which is used for rendering computer images, and employs it in a reverse mode for the calculation of face albedo from real facial images. Our algorithm estimates the number of dominant light sources and their directions from the specularities in the facial image and the corresponding 3D points. The intensities of the light sources and the parameters of the Phong model are estimated by fitting the Phong model to the facial skin data. Unlike existing approaches, our algorithm takes into account both Lambertian and specular reflections as well as attached and cast shadows. Moreover, our algorithm is invariant to facial pose and expression and can effectively handle the case of multiple extended light sources. The algorithm was tested on the challenging FRGC v2.0 data and satisfactory results were achieved. The mean fitting error was 6.3% of the maximum color value. Performing face recognition using the normalized images increased both identification and verification rates.
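Estimating light directions from specularities, as described above, reduces at a mirror-like highlight to reflecting the view vector about the surface normal. A sketch of that single geometric step (function name is mine; unit vectors assumed):

```python
import numpy as np

def light_from_specularity(normal, view):
    """At a specular highlight the Phong model is read 'in reverse': the
    light direction is the view vector reflected about the surface normal,
    l = 2*(n.v)*n - v, with n and v unit vectors taken from the co-registered
    3D point cloud and camera geometry."""
    n = np.asarray(normal, float)
    v = np.asarray(view, float)
    return 2.0 * np.dot(n, v) * n - v
```

Repeating this at every detected specular pixel and clustering the resulting directions gives the dominant light sources; their intensities and the remaining Phong parameters are then fit to the skin data.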

19.
Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account for binocular disparity of surfaces that are diffuse and opaque only. However, combinations of transparent as well as specular materials are common in the real and virtual worlds, and pose a significant problem. For example, excessive disparities can be created which cannot be fused by the observer. Also, multiple stereo interpretations become possible, e.g., for glass, which both reflects and refracts, which may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes, where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization to find the best per-pixel camera parameters, assuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates that our approach combines comfortable viewing with realistic depiction of typical specular scenes.

20.
Acquired 3D point clouds make possible quick modeling of virtual scenes from the real world. With modern 3D capture pipelines, each point sample often comes with additional attributes such as normal vector and color response. Although rendering and processing such data has been extensively studied, little attention has been devoted using the light transport hidden in the recorded per‐sample color response to relight virtual objects in visual effects (VFX) look‐dev or augmented reality (AR) scenarios. Typically, standard relighting environment exploits global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud. To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real‐time Point‐Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene that can cover part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings on the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings or with low‐quality dynamic range by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage as well as a new mipmapping operator, tailored for G‐buffers, to achieve real‐time performances. 
As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step from the perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号