Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
We present a novel method to estimate an approximation of the reflectance characteristics of optically thick, homogeneous translucent materials using only a single photograph as input. First, we approximate the diffusion profile as a linear combination of piecewise constant functions, an approach that enables a linear system minimization and maximizes robustness in the presence of suboptimal input data inferred from the image. We then fit to a smoother monotonically decreasing model, ensuring continuity on its first derivative. We show the feasibility of our approach and validate it in controlled environments, comparing well against physical measurements from previous works. Next, we explore the performance of our method in uncontrolled scenarios, where neither lighting nor geometry is known. We show that these can be roughly approximated from the corresponding image by making two simple assumptions: that the object is lit by a distant light source and that it is globally convex, allowing us to capture the visual appearance of the photographed material. Compared with previous works, our technique offers an attractive balance between visual accuracy and ease of use, allowing its use in a wide range of scenarios including off‐the‐shelf, single images, thus extending the current repertoire of real‐world data acquisition techniques.
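The first step described above, approximating a profile with a linear combination of piecewise constant functions, reduces to an ordinary linear least-squares problem. The sketch below illustrates that reduction on a synthetic exponential profile; the bin layout and the stand-in profile are illustrative assumptions, not the paper's measured data.

```python
import numpy as np

# Hypothetical sketch: fit a radial diffusion profile R(r) as a linear
# combination of piecewise-constant "box" basis functions.  Because the
# basis functions are indicators over disjoint bins, the least-squares
# solution is simply the per-bin mean of the samples.
def fit_piecewise_constant(r, samples, edges):
    # Design matrix: column j is 1 where r falls in bin [edges[j], edges[j+1])
    A = np.stack([(r >= edges[j]) & (r < edges[j + 1])
                  for j in range(len(edges) - 1)], axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return coeffs  # one constant per bin

r = np.linspace(0.0, 4.0, 400)
profile = np.exp(-1.5 * r)               # stand-in for a measured profile
edges = np.linspace(0.0, 4.0 + 1e-9, 9)  # 8 equal-width bins
c = fit_piecewise_constant(r, profile, edges)
```

The fitted constants decrease monotonically here, which is what makes the subsequent fit to a smooth monotonically decreasing model well posed.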

2.
The appearance manifold [WTL*06] is an efficient approach for modeling and editing the time‐variant appearance of materials from BRDF data captured at a single time instance. However, this method is difficult to apply to images in which weathering and shading variations are combined. In this paper, we present a technique for modeling and editing the weathering effects of an object in a single image with appearance manifolds. In our approach, we formulate the input image as the product of reflectance and illuminance. An iterative method is then developed to construct the appearance manifold in color space (i.e., Lab space) for modeling the reflectance variations caused by weathering. Based on the appearance manifold, we propose a statistical method to robustly decompose reflectance and illuminance for each pixel. For editing, we introduce a “pixel‐walking” scheme to modify the pixel reflectance according to its position on the manifold, by which detailed reflectance variations are well preserved. We illustrate our technique in various applications, including weathering transfer between two images, which our technique enables for the first time. Results show that our technique produces much better results than existing methods, especially for objects with complex geometry and shading effects.

3.
Automatic decomposition of intrinsic images, especially for complex real‐world images, is a challenging under‐constrained problem. Thus, we propose a new algorithm that generates and combines multi‐scale properties of chromaticity differences and intensity contrast. The key observation is that the estimation of image reflectance, which is neither a pixel‐based nor a region‐based property, can be improved by using multi‐scale measurements of image content. The new algorithm iteratively coarsens a graph reflecting the reflectance similarity between neighbouring pixels. Multi‐scale reflectance properties are then aggregated so that the graph reflects the reflectance property at different scales. This is followed by an L0 sparse regularization on the whole reflectance image, which enforces variation in the reflectance image to be high‐frequency and sparse. We formulate this problem through energy minimization, which can be solved efficiently within a few iterations. The effectiveness of the new algorithm is tested on the Massachusetts Institute of Technology (MIT) dataset, the Intrinsic Images in the Wild (IIW) dataset, and various natural images.
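The L0 sparse regularization mentioned above penalises the *count* of non-zero gradients, which drives the result toward a piecewise-constant signal. A minimal 1-D sketch, using the standard half-quadratic splitting with an auxiliary gradient variable (the lambda value and beta schedule are illustrative choices, not the paper's parameters):

```python
import numpy as np

# 1-D L0 gradient smoothing sketch: alternate between hard-thresholding
# the gradient (the L0 step) and a quadratic solve in the Fourier domain
# (circular boundary conditions).
def l0_smooth_1d(s, lam=2e-2, beta=4e-2, kappa=2.0, beta_max=1e5):
    n = len(s)
    k = np.zeros(n)
    k[0], k[-1] = -1.0, 1.0              # circular forward-difference kernel
    K = np.fft.fft(k)
    S = np.fft.fft(s)
    x = s.astype(float).copy()
    while beta < beta_max:
        dx = np.roll(x, -1) - x          # circular gradient of the estimate
        h = np.where(dx * dx >= lam / beta, dx, 0.0)  # hard threshold (L0 step)
        num = S + beta * np.conj(K) * np.fft.fft(h)
        x = np.real(np.fft.ifft(num / (1.0 + beta * np.abs(K) ** 2)))
        beta *= kappa
    return x

rng = np.random.default_rng(0)
step = np.where(np.arange(100) < 50, 0.0, 1.0)
noisy = step + 0.02 * rng.standard_normal(100)
smoothed = l0_smooth_1d(noisy)           # small gradients vanish, the step survives
```

Small noise gradients are zeroed early (when the threshold lam/beta is large), while the large step gradient survives, so the output is close to piecewise constant.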

4.
A popular approach for computing photorealistic images of virtual objects requires applying reflectance profiles measured from real surfaces, introducing several challenges: the memory needed to faithfully capture realistic material reflectance is large, the choice of materials is limited to the set of measurements, and image synthesis using the measured data is costly. Typically, this data is either compressed by projecting it onto a subset of its linear principal components or by applying non‐linear methods. The former requires many components to faithfully represent the input reflectance, whereas the latter necessitates costly extrapolation algorithms. We learn an underlying, low‐dimensional non‐linear reflectance manifold amenable to rapid exploration and rendering of real‐world materials. We can express interpolated materials as linear combinations of the measured data, despite them lying on an inherently non‐linear manifold. This allows us to efficiently interpolate and extrapolate measured BRDFs, and to render directly from the manifold representation. We exploit properties of Gaussian process latent variable models and use our representation for high‐performance and offline rendering with interpolated real‐world materials.

5.
6.
We address the problem of jointly estimating the scene illumination, the radiometric camera calibration and the reflectance properties of an object using a set of images from a community photo collection. The highly ill-posed nature of this problem is circumvented by using appropriate representations of illumination, an empirical model for the nonlinear function that relates image irradiance with intensity values and additional assumptions on the surface reflectance properties. Using a 3D model recovered from an unstructured set of images, we estimate the coefficients that represent the illumination for each image using a frequency framework. For each image, we also compute the corresponding camera response function. Additionally, we calculate a simple model for the reflectance properties of the 3D model. A robust non-linear optimization is proposed exploiting the high sparsity present in the problem.

7.
Empowered by deep learning, recent methods for material capture can estimate a spatially‐varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization‐based approaches. However, a single image is often simply not enough to observe the rich appearance of real‐world materials. We present a deep‐learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order‐independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high‐quality reconstructions with as few as 1 to 10 images, a sweet spot between existing single‐image and complex multi‐image approaches.
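The order-independent fusing layer is the key to handling an unordered, variable-length set of photographs: per-image features are reduced with a symmetric operation (here max-pooling), so the fused code cannot depend on input order or count. A minimal sketch, where the random "encoder" weights stand in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((8, 3))          # hypothetical per-pixel encoder weights

def encode(img):                         # img: (H, W, 3)
    return np.maximum(img @ W.T, 0.0)    # ReLU features, shape (H, W, 8)

def fuse(images):                        # variable-length list of images
    feats = np.stack([encode(im) for im in images])
    return feats.max(axis=0)             # symmetric max-pool over the set

imgs = [rng.random((4, 4, 3)) for _ in range(5)]
fused = fuse(imgs)
permuted = fuse(imgs[::-1])              # same photos, different order
```

Any symmetric reduction (max, mean, sum) has this permutation-invariance property; max-pooling is one common choice.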

8.
In this paper we show how to estimate facial surface reflectance properties (a slice of the BRDF and the albedo) in conjunction with the facial shape from a single image. The key idea underpinning our approach is to iteratively interleave the two processes of estimating reflectance properties based on the current shape estimate and updating the shape estimate based on the current estimate of the reflectance function. For frontally illuminated faces, the reflectance properties can be described by a function of one variable which we estimate by fitting a curve to the scattered and noisy reflectance samples provided by the input image and estimated shape. For non-frontal illumination, we fit a smooth surface to the scattered 2D reflectance samples. We make use of a novel statistical face shape constraint which we term ‘model-based integrability’ which we use to regularise the shape estimation. We show that the method is capable of recovering accurate shape and reflectance information from single grayscale or colour images using both synthetic and real world imagery. We use the estimated reflectance measurements to render synthetic images of the face in varying poses. To synthesise images under novel illumination, we show how to fit a parametric model of reflectance to the estimated reflectance function.

9.
In this paper, we present a new method for removing shadows from images. First, shadows are detected by interactive brushing assisted with a Gaussian Mixture Model. Second, the detected shadows are removed using an adaptive illumination transfer approach that accounts for the reflectance variation of the image texture. The contrast and noise levels of the result are then improved with a multi‐scale illumination transfer technique. Finally, any visible shadow boundaries in the image can be eliminated based on our Bayesian framework. We also extend our method to video data and achieve temporally consistent shadow‐free results.
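The Gaussian-mixture step above can be illustrated with a tiny stand-in: fit a two-component 1-D mixture to pixel intensities with a few EM iterations, so the dark mode suggests shadow candidates. The iteration count, initialisation, and synthetic intensities below are assumptions for the sketch, not the paper's pipeline.

```python
import numpy as np

# Minimal 2-component 1-D Gaussian mixture fitted by EM.
def fit_gmm2(x, iters=50):
    mu = np.array([x.min(), x.max()])        # spread initial means apart
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        p = pi / np.sqrt(2 * np.pi * var) * \
            np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

rng = np.random.default_rng(1)
shadow = rng.normal(0.15, 0.03, 500)         # dark (shadowed) intensities
lit = rng.normal(0.70, 0.05, 1500)           # lit intensities
pi, mu, var = fit_gmm2(np.concatenate([shadow, lit]))
```

In an interactive system, brushed pixels would seed this mixture and each remaining pixel would be labelled by the component with the higher responsibility.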

10.
Radiometric compensation methods remove the effect of the underlying spatially varying surface reflectance of the texture when projecting on textured surfaces. All prior work samples the surface‐reflectance‐dependent radiometric transfer function from the projector to the camera at every pixel, which requires the camera to observe tens or hundreds of images projected by the projector. In this paper, we cast the radiometric compensation problem as the sampling and reconstruction of a multi‐dimensional radiometric transfer function that models the color transfer function from the projector to an observing camera and the surface reflectance in a unified manner. Such a multi‐dimensional representation makes no assumption about the linearity of the projector‐to‐camera color transfer function and can therefore handle projectors with non‐linear color transfer functions (e.g. DLP, LCOS, LED‐based or laser‐based). We show that a well‐curated sampling of this multi‐dimensional function, achieved by exploiting the following key properties, is adequate for its accurate representation: (a) the spectral reflectance of most real‐world materials is smooth and can be well represented using a lower‐dimensional function; (b) the reflectance properties of the underlying texture have strong redundancies – for example, multiple pixels or even regions can have similar surface reflectance; (c) the color transfer function from the projector to the camera has strong input coherence. The proposed sampling allows us to reduce the number of projected images that need to be observed by a camera by up to two orders of magnitude, the minimum being only two. We then present a new multi‐dimensional scattered data interpolation technique to reconstruct the radiometric transfer function at a high spatial density (i.e. at every pixel) to compute the compensation image. We show that the accuracy of our interpolation technique is higher than that of existing methods.
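The sampling-and-reconstruction framing can be illustrated with a generic scattered-data interpolator. The sketch below uses 1-D inverse-distance weighting as a stand-in for the paper's multi-dimensional technique: a sparse set of (projector input, camera response) samples is densified to a per-pixel transfer function. The gamma-like response used to generate samples is a made-up non-linearity.

```python
import numpy as np

# Inverse-distance-weighted (Shepard) interpolation of sparse samples.
def idw(xq, xs, ys, power=2.0, eps=1e-12):
    d = np.abs(xq[:, None] - xs[None, :]) + eps  # query-to-sample distances
    w = 1.0 / d ** power                          # nearer samples dominate
    return (w * ys).sum(axis=1) / w.sum(axis=1)

transfer = lambda u: u ** 2.2               # assumed non-linear response
xs = np.linspace(0.0, 1.0, 9)               # sparse samples (few projected images)
ys = transfer(xs)
xq = np.linspace(0.0, 1.0, 101)             # dense reconstruction grid
yq = idw(xq, xs, ys)
```

Because the response is smooth (property (a) in the abstract), even nine samples reconstruct it to within a few percent over the whole input range.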

11.
Recovering intrinsic images from a single image   (total citations: 3; self-citations: 0; citations by others: 3)
Interpreting real-world images requires the ability to distinguish the different characteristics of the scene that lead to its final appearance. Two of the most important of these characteristics are the shading and reflectance of each point in the scene. We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each image derivative is classified, given the lighting direction, as being caused by shading or by a change in the surface's reflectance. The classifiers gather local evidence about the surface's form and color, which is then propagated using the Generalized Belief Propagation algorithm. The propagation step disambiguates areas of the image where the correct classification is not clear from local evidence. We use real-world images to demonstrate results and show how each component of the system affects them.

12.
Intrinsic images are a mid‐level representation of an image that decomposes it into reflectance and illumination layers. The reflectance layer captures the color/texture of surfaces in the scene, while the illumination layer captures shading effects caused by interactions between scene illumination and surface geometry. Intrinsic images have a long history in computer vision and recently in computer graphics, and have been shown to be a useful representation for tasks ranging from scene understanding and reconstruction to image editing. In this report, we review and evaluate past work on this problem. Specifically, we discuss each work in terms of the priors they impose on the intrinsic image problem. We introduce a new synthetic ground‐truth dataset that we use to evaluate the validity of these priors and the performance of the methods. Finally, we evaluate the performance of the different methods in the context of image‐editing applications.
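The decomposition model being evaluated is the per-pixel product I = R · S of a reflectance layer and an illumination (shading) layer. The toy layers and scale-invariant error metric below illustrate how a synthetic ground-truth dataset can score a candidate decomposition; the specific layers and metric are illustrative, not the report's dataset.

```python
import numpy as np

rng = np.random.default_rng(7)
R = rng.choice([0.2, 0.5, 0.9], size=(16, 16))               # piecewise reflectance
S = np.linspace(0.3, 1.0, 16)[None, :] * np.ones((16, 16))   # smooth shading
I = R * S                                                    # observed image

def si_mse(pred, gt):
    # Scale-invariant MSE: a decomposition is only defined up to a global
    # scale, so fit the best scalar alpha before measuring the error.
    alpha = (pred * gt).sum() / (pred * pred).sum()
    return ((alpha * pred - gt) ** 2).mean()

perfect = si_mse(R * 2.0, R)   # same layer up to scale -> zero error
wrong = si_mse(S, R)           # shading mistaken for reflectance -> large error
```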

13.
This paper proposes a new approach for color transfer between two images. Our method is unique in its consideration of the scene illumination and the constraint that the mapped image must be within the color gamut of the target image. Specifically, our approach first performs a white‐balance step on both images to remove color casts caused by different illuminations in the source and target image. We then align each image to share the same ‘white axis’ and perform a gradient‐preserving histogram matching technique along this axis to match the tone distribution between the two images. We show that this illuminant‐aware strategy gives a better result than directly working with the original source and target images' luminance channels, as done by many previous methods. Afterwards, our method performs a full gamut‐based mapping technique rather than processing each channel separately. This guarantees that the colors of our transferred image lie within the target gamut. Our experimental results show that this combined illuminant‐aware and gamut‐based strategy produces more compelling results than previous methods. We detail our approach and demonstrate its effectiveness on a number of examples.
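The core of histogram matching is an equal-rank value transfer: each source value is mapped to the target value of the same quantile. The paper additionally preserves gradients and works along a shared white axis; this minimal sketch only illustrates the tone-matching step on synthetic luminance arrays.

```python
import numpy as np

# Quantile (rank) matching: the k-th smallest source value becomes the
# k-th smallest target value, so the output histogram equals the target's.
def histogram_match(source, target):
    order = np.argsort(source)
    matched = np.empty_like(source)
    matched[order] = np.sort(target)   # equal-rank value transfer
    return matched

rng = np.random.default_rng(3)
src = rng.normal(0.3, 0.05, 1000)      # dark source tones
tgt = rng.normal(0.6, 0.10, 1000)      # brighter target tones
out = histogram_match(src, tgt)
```

After matching, the output is a rearrangement of the target values that preserves the ordering of the source pixels.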

14.
Acquired above variable clouds, aerial images contain both ground reflection and cloud effects. Due to their non‐uniformity, clouds in aerial images are even harder to remove than haze in terrestrial images. This paper proposes a divide‐and‐conquer scheme to remove thin translucent clouds from a single RGB aerial image. Based on the colour attenuation prior, we design a veiling metric that effectively indicates the local concentration of clouds. Using this metric, an aerial image containing clouds of varying thickness is segmented into multiple regions. Each region is veiled by clouds of nearly equal concentration and is hence subject to common assumptions, such as a boundary constraint on transmission. The atmospheric light in each region is estimated by a modified local colour‐line model and composed into a spatially‐varying airlight map for the entire image. Scene transmission is then estimated and further refined by a weighted norm‐based contextual regularization. Finally, we recover ground reflection via the atmospheric scattering model. We verify our cloud removal method on a number of aerial images containing thin clouds and compare our results with classical single‐image dehazing methods and a state‐of‐the‐art learning‐based declouding method.
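The final recovery step uses the standard atmospheric scattering model I = J·t + A·(1 − t), where J is the scene radiance (ground reflection), t the transmission and A the airlight. The sketch below shows the algebraic inversion on synthetic values; the numbers and the transmission floor are illustrative assumptions.

```python
import numpy as np

# Invert the scattering model: J = (I - A) / t + A, with a transmission
# floor t_min so near-zero transmission does not amplify noise.
def recover_scene(I, A, t, t_min=0.1):
    t = np.maximum(t, t_min)
    return (I - A) / t + A

J = np.array([0.2, 0.5, 0.8])      # true ground reflection
A = 0.9                            # airlight
t = np.array([0.6, 0.4, 0.7])      # spatially-varying transmission
I = J * t + A * (1.0 - t)          # cloud-veiled observation
J_hat = recover_scene(I, A, t)     # recovers J exactly on clean data
```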

15.
Collaborative slide image viewing systems are becoming increasingly important in pathology applications such as telepathology and E‐learning. Despite rapid advances in computing and imaging technology, current digital pathology systems have limited performance with respect to remote viewing of whole slide images on desktop or mobile computing devices. In this paper we present a novel digital pathology client–server system that supports collaborative viewing of multi‐plane whole slide images over standard networks using multi‐touch‐enabled clients. Our system is built upon a standard HTTP web server and a MySQL database to allow multiple clients to exchange image and metadata concurrently. We introduce a domain‐specific image‐stack compression method that leverages real‐time hardware decoding on mobile devices. It adaptively encodes image stacks in a decorrelated colour space to achieve extremely low bitrates (0.8 bpp) with very low loss of image quality. We evaluate the image quality of our compression method and the performance of our system for diagnosis with an in‐depth user study.
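Encoding in a decorrelated colour space is a standard prelude to compression. The abstract does not name the transform, so as an assumed example the sketch below uses the reversible YCoCg-R lifting transform, a common choice for this purpose; its lifting structure makes it exactly invertible in integer arithmetic.

```python
import numpy as np

# YCoCg-R forward lifting: luma-like Y plus two chroma channels.
def rgb_to_ycocg_r(r, g, b):
    co = r - b
    tmp = b + (co >> 1)
    cg = g - tmp
    y = tmp + (cg >> 1)
    return y, co, cg

# Inverse lifting: undo the steps in reverse order, exactly.
def ycocg_r_to_rgb(y, co, cg):
    tmp = y - (cg >> 1)
    g = cg + tmp
    b = tmp - (co >> 1)
    r = b + co
    return r, g, b

rng = np.random.default_rng(5)
rgb = rng.integers(0, 256, size=(3, 64))           # random 8-bit pixels
y, co, cg = rgb_to_ycocg_r(*rgb.astype(np.int64))
back = ycocg_r_to_rgb(y, co, cg)                   # lossless round trip
```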

16.

We introduce a cost-effective reflectance calibration method for small unmanned aerial vehicle (sUAV) images using ethylene-vinyl acetate (EVA) greyscale reference panels. The goal is to test whether such light-weight, low-cost panels can provide sufficient calibration accuracy to support UAV survey projects. Universal calibration equations that convert red-green-blue (RGB) digital number (DN) values of UAV images to surface reflectance values were constructed from the relationship between RGB values measured by a colour digitizer and surface reflectance values measured by a spectrometer. We compared the calibration results for UAV ortho-mosaic images acquired under three different illumination conditions in late autumn with the results derived from high-cost commercial panels. The comparison showed a high degree of agreement between our method using the EVA panels and the traditional methods using the commercial panels. The Mann–Whitney U test verified that our method was statistically significant under all tested illumination conditions. In addition, the calibration results for two different sensors and three different flight altitudes, acquired in early summer, were satisfactory. This method is transferable to various illumination conditions and flight altitudes as long as the effects of shade and the bidirectional reflectance distribution function (BRDF) are minimal. We expect our research to expedite sUAV image calibration by lowering its cost and broadening its availability.
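The calibration idea behind greyscale reference panels is an empirical-line fit: a linear map from image digital numbers (DN) to surface reflectance, estimated from the panels and then applied to the whole image. The panel DN and reflectance values below are invented for illustration, not the paper's measurements.

```python
import numpy as np

panel_reflectance = np.array([0.05, 0.20, 0.40, 0.60, 0.80])  # from spectrometer
panel_dn = np.array([18.0, 61.0, 118.0, 176.0, 233.0])        # from the image

# Fit reflectance = gain * DN + offset by least squares over the panels.
gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)

def dn_to_reflectance(dn):
    return gain * dn + offset

scene_dn = np.array([40.0, 150.0])          # arbitrary scene pixels
scene_refl = dn_to_reflectance(scene_dn)    # calibrated reflectance values
```

With a linear sensor, the residuals on the panels themselves are the first sanity check on the calibration.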

17.
We present a fast reconstruction filtering method for images generated with Monte Carlo–based rendering techniques. Our approach specializes in reducing global illumination noise in the presence of depth‐of‐field effects at very low sampling rates and interactive frame rates. We employ edge‐aware filtering in the sample space to locally improve outgoing radiance of each sample. The improved samples are then distributed in the image plane using a fast, linear manifold‐based approach supporting very large circles of confusion. We evaluate our filter by applying it to several images containing noise caused by Monte Carlo–simulated global illumination, area light sources and depth of field. We show that our filter can efficiently denoise such images at interactive frame rates on current GPUs and with as few as 4–16 samples per pixel. Our method operates only on the colour and geometric sample information output of the initial rendering process. It does not make any assumptions on the underlying rendering technique and sampling strategy and can therefore be implemented completely as a post‐process filter.

18.
In this paper, we present a general variational method for image fusion. In particular, we combine different images of the same subject to a single composite that offers optimal exposedness, saturation and local contrast. Previous research approaches this task by first pre‐computing application‐specific weights based on the input, and then combining these weights with the images to the final composite later on. In contrast, we design our model assumptions directly on the fusion result. To this end, we formulate the output image as a convex combination of the input and incorporate concepts from perceptually inspired contrast enhancement such as a local and non‐linear response. This output‐driven approach is the key to the versatility of our general image fusion model. In this regard, we demonstrate the performance of our fusion scheme with several applications such as exposure fusion, multispectral imaging and decolourization. For all application domains, we conduct thorough validations that illustrate the improvements compared to state‐of‐the‐art approaches that are tailored to the individual tasks.
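Formulating the output as a convex combination of the inputs means every fused pixel is a weighted average with non-negative weights summing to one. The sketch below shows the idea on exposure fusion with a Gaussian well-exposedness weight; the weight definition is a conventional choice standing in for the paper's variational model.

```python
import numpy as np

# Convex-combination fusion: per-pixel weights favour mid-range values,
# normalised so the result stays between the inputs everywhere.
def fuse_exposures(images, sigma=0.2):
    stack = np.stack(images)                         # (n, H, W)
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True)                # weights sum to 1 per pixel
    return (w * stack).sum(axis=0)

under = np.clip(np.linspace(0.0, 0.6, 64), 0, 1).reshape(8, 8)  # dark exposure
over = np.clip(np.linspace(0.4, 1.0, 64), 0, 1).reshape(8, 8)   # bright exposure
fused = fuse_exposures([under, over])
```

Because the combination is convex, the fused image is guaranteed to lie pointwise between the darkest and brightest input.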

19.
Many casually taken ‘tourist’ photographs contain architectural objects such as houses and buildings. Reconstructing the 3D scenes captured in such a single photograph is a very challenging problem. We propose a novel approach to reconstruct such architectural scenes with minimal and simple user interaction, with the goal of providing 3D navigational capability to an image rather than acquiring accurate geometric detail. Our system, Peek‐in‐the‐Pic, is based on a sketch‐based geometry reconstruction paradigm. Given an image, the user simply traces out objects from it. Our system regards these as perspective line drawings, automatically completes them and reconstructs geometry from them. We make basic assumptions about the structure of traced objects and provide simple gestures for placing additional constraints. We also provide a simple sketching tool to progressively complete parts of the reconstructed buildings that are not visible in the image and cannot be completed automatically. Finally, we fill the holes created in the original image when reconstructed buildings are removed from it, using automatic texture synthesis. Users can spend more time with interactive texture synthesis to refine the image further. Thus, instead of looking at flat images, a user can fly through them after some simple processing. Minimal manual work, ease of use and interactivity are the salient features of our approach.

20.
Photographers routinely compose multiple manipulated photos of the same scene into a single image, producing a fidelity difficult to achieve using any individual photo. Alternately, 3D artists set up rendering systems to produce layered images to isolate individual aspects of the light transport, which are composed into the final result in post‐production. Regrettably, these approaches either take considerable time and effort to capture, or remain limited to synthetic scenes. In this paper, we suggest a method to decompose a single image into multiple layers that approximates effects such as shadow, diffuse illumination, albedo, and specular shading. To this end, we extend the idea of intrinsic images along two axes: first, by complementing shading and reflectance with specularity and occlusion, and second, by introducing directional dependence. We do so by training a convolutional neural network (CNN) with synthetic data. Such decompositions can then be manipulated in any off‐the‐shelf image manipulation software and composited back. We demonstrate the effectiveness of our decomposition on synthetic (i.e., rendered) and real data (i.e., photographs), and use them for photo manipulations that are otherwise impossible to perform from single images. We provide comparisons with state‐of‐the‐art methods and also evaluate the quality of our decompositions via a user study measuring the effectiveness of the resultant photo retouching setup. Supplementary material and code are available for research use at geometry.cs.ucl.ac.uk/projects/2017/layered-retouching.
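The "edit a layer, composite back" workflow can be sketched with a toy layer model. The compositing equation below (albedo × diffuse × shadow + specular) is an assumed approximation for illustration; the paper's actual layers and their combination come from a trained CNN.

```python
import numpy as np

rng = np.random.default_rng(11)
albedo = rng.uniform(0.2, 0.9, (8, 8))
diffuse = rng.uniform(0.5, 1.0, (8, 8))
shadow = np.where(rng.random((8, 8)) < 0.3, 0.4, 1.0)   # toy shadow mask
specular = rng.uniform(0.0, 0.1, (8, 8))

# Per-pixel layer compositing: edit any layer, then recomposite.
def composite(albedo, diffuse, shadow, specular):
    return albedo * diffuse * shadow + specular

image = composite(albedo, diffuse, shadow, specular)
# Example retouch: lift the shadows without touching the other layers.
edited = composite(albedo, diffuse, np.maximum(shadow, 0.8), specular)
```

Such an edit is a per-pixel operation on one layer, which is exactly what makes layered retouching in off-the-shelf software practical.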


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号