Similar Documents
20 similar documents found (search time: 15 ms)
1.
Edge-preserving image filtering is a valuable tool for a variety of applications in image processing and computer vision. Motivated by a recent, simple but effective local Laplacian filter, we propose a scalable and efficient image filtering framework that extends this edge-preserving filter and provides a uniform implementation running in O(N) time. The proposed framework is built upon a practical global-to-local strategy. The input image is first remapped globally by a series of tentative remapping functions to generate a virtual candidate image sequence (Virtual Image Pyramid Sequence, VIPS). This sequence is then recombined locally into a single output image by a flexible edge-aware pixel-level fusion rule. To avoid halo artifacts, both the output image and the virtual candidate image sequence are transformed into multi-resolution pyramid representations. Four applications of the proposed framework are presented: single-image dehazing, multi-exposure fusion, fast edge-preserving filtering and tone mapping. Experiments on filtering quality and computational efficiency indicate that the framework supports a wide range of fast image filters that yield visually compelling results.
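A minimal Python sketch of the global-to-local strategy described above, for a greyscale image in [0, 1]: the input is remapped by several tentative point-wise functions, the candidates are turned into Laplacian pyramids, and coefficients are fused per pixel according to the Gaussian pyramid of the input. The remapping function, the fusion rule and all parameters (remap, n_levels, n_refs, sigma, alpha) are illustrative stand-ins, not the authors' implementation.

import numpy as np
from scipy import ndimage

def down(img):            # blur, then decimate by 2
    return ndimage.gaussian_filter(img, 1.0)[::2, ::2]

def up(img, shape):       # upsample to 'shape' by zero-stuffing, then blur
    out = np.zeros(shape)
    out[::2, ::2] = img
    return ndimage.gaussian_filter(out, 1.0) * 4.0

def laplacian_pyramid(img, n_levels):
    pyr, cur = [], img
    for _ in range(n_levels - 1):
        low = down(cur)
        pyr.append(cur - up(low, cur.shape))
        cur = low
    pyr.append(cur)                       # low-pass residual
    return pyr

def gaussian_pyramid(img, n_levels):
    pyr, cur = [img], img
    for _ in range(n_levels - 1):
        cur = down(cur)
        pyr.append(cur)
    return pyr

def remap(img, g, sigma=0.2, alpha=0.5):
    # tentative point-wise remapping centred at reference intensity g
    d = img - g
    return g + np.sign(d) * sigma * np.abs(d / sigma) ** alpha

def filter_image(img, n_levels=5, n_refs=8):
    refs = np.linspace(0.0, 1.0, n_refs)
    gauss = gaussian_pyramid(img, n_levels)
    # virtual candidate image sequence: one remapped image per reference value
    candidates = [laplacian_pyramid(remap(img, g), n_levels) for g in refs]
    fused = []
    for lvl in range(n_levels):
        # edge-aware fusion: interpolate between the two candidates whose
        # reference intensities bracket the local Gaussian-pyramid value
        g = np.clip(gauss[lvl], 0.0, 1.0) * (n_refs - 1)
        lo = np.clip(np.floor(g).astype(int), 0, n_refs - 2)
        t = g - lo
        stack = np.stack([c[lvl] for c in candidates])      # (n_refs, H, W)
        lo_coef = np.take_along_axis(stack, lo[None], 0)[0]
        hi_coef = np.take_along_axis(stack, (lo + 1)[None], 0)[0]
        fused.append((1 - t) * lo_coef + t * hi_coef)
    out = fused[-1]                       # collapse the fused pyramid
    for lvl in range(n_levels - 2, -1, -1):
        out = fused[lvl] + up(out, fused[lvl].shape)
    return out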

2.
In this work we present a new algorithm for accelerating the colour bilateral filter based on a subsampling strategy that works in the spatial domain. The basic idea is to use a suitable subset of samples of the full kernel to obtain a good estimate of the exact filter values. The main advantages of the proposed approach are an excellent trade-off between visual quality and speed-up, very low memory overhead, and a straightforward GPU implementation that allows real-time filtering. We show different applications of the proposed filter, in particular efficient cross-bilateral filtering, real-time edge-aware image editing and fast video denoising. We compare our method against the state of the art in terms of image quality, time performance and memory usage.
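A small sketch of the subsampling idea, assuming a random fixed subset of kernel offsets; the paper's actual sampling pattern, parameters and GPU mapping differ, and image borders simply wrap here for brevity.

import numpy as np

def subsampled_bilateral(img, sigma_s=4.0, sigma_r=0.1, n_samples=16, seed=0):
    """img: float RGB image in [0, 1], shape (H, W, 3)."""
    rng = np.random.default_rng(seed)
    radius = int(2 * sigma_s)
    # fixed subset of spatial offsets reused for every pixel
    offsets = rng.integers(-radius, radius + 1, size=(n_samples, 2))
    acc = np.zeros_like(img)
    wsum = np.zeros(img.shape[:2] + (1,))
    for dy, dx in offsets:
        shifted = np.roll(img, (int(dy), int(dx)), axis=(0, 1))
        w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))   # spatial weight
        diff = np.sum((img - shifted) ** 2, axis=2, keepdims=True)
        w = w_s * np.exp(-diff / (2 * sigma_r ** 2))              # range weight
        acc += w * shifted
        wsum += w
    return acc / np.maximum(wsum, 1e-8)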

3.
Figure-ground segmentation from a bounding-box input, provided either automatically or manually, has been extremely popular in the last decade and has influenced various applications. Much research has focused on high-quality segmentation using complex formulations that often lead to slow techniques and hamper practical usage. In this paper we demonstrate a very fast segmentation technique that still achieves very high-quality results. We propose to replace the time-consuming iterative refinement of global colour models in the traditional GrabCut formulation by a densely connected CRF. To motivate this decision, we show that a dense CRF implicitly models unnormalized global colour models for foreground and background. This relationship provides an insightful bridge between the dense CRF and the GrabCut functional. We extensively evaluate our algorithm on two well-known benchmarks. The experimental results demonstrate that the proposed algorithm achieves an order-of-magnitude (10x) speed-up with respect to the closest competitor while at the same time achieving considerably higher accuracy.
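A rough sketch of box-initialized segmentation with a dense CRF in place of GrabCut's iterative colour-model refinement, assuming the third-party pydensecrf package and its DenseCRF2D interface; the histogram-based unary term and all pairwise parameters are illustrative simplifications, not the paper's formulation.

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def segment_from_box(img, box, n_bins=16, n_iters=5):
    """img: uint8 RGB image (H, W, 3); box: (x0, y0, x1, y1) around the object."""
    H, W = img.shape[:2]
    x0, y0, x1, y1 = box
    inside = np.zeros((H, W), bool)
    inside[y0:y1, x0:x1] = True
    # unnormalized global colour models: 3D histograms of quantised colours
    # inside (tentative foreground) and outside (background) the box
    q = (img.astype(np.int64) // (256 // n_bins)).reshape(-1, 3)
    idx = (q[:, 0] * n_bins + q[:, 1]) * n_bins + q[:, 2]
    fg_hist = np.bincount(idx[inside.ravel()], minlength=n_bins ** 3) + 1.0
    bg_hist = np.bincount(idx[~inside.ravel()], minlength=n_bins ** 3) + 1.0
    p_fg = fg_hist[idx] / (fg_hist[idx] + bg_hist[idx])
    p_fg[~inside.ravel()] = 1e-6                  # outside the box stays background
    probs = np.stack([1.0 - p_fg, p_fg]).astype(np.float32)
    # dense CRF: unary from the colour models, Gaussian + bilateral pairwise terms
    crf = dcrf.DenseCRF2D(W, H, 2)
    crf.setUnaryEnergy(unary_from_softmax(probs))
    crf.addPairwiseGaussian(sxy=3, compat=3)
    crf.addPairwiseBilateral(sxy=60, srgb=10, rgbim=np.ascontiguousarray(img), compat=5)
    Q = crf.inference(n_iters)
    return np.argmax(np.array(Q), axis=0).reshape(H, W).astype(bool)   # True = foreground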

4.
In this paper we present an image-based algorithm that renders visually plausible anti-aliased soft shadows in real time. Our technique employs a new shadow pre-filtering method based on an extended exponential shadow mapping theory. The algorithm achieves faithful contact shadows by adopting an optimal approximation to the exponential shadow reconstruction function. A novel overflow-free summed-area-table tile grid data structure guarantees numerical stability and avoids erroneous filter responses. By integrating an adaptive anisotropic filtering method, the proposed algorithm produces high-quality smooth shadows both in large penumbra areas and in high-frequency sharp transitions, while keeping memory consumption low and performance high.
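A sketch of basic exponential shadow-map pre-filtering, which the method above extends: occluder depths are warped as exp(c*z) and box-filtered with a summed-area table, so the shadow test becomes a single multiplication. The constant c and filter radius are illustrative, and the paper's overflow-free tile-grid SAT and adaptive anisotropic filtering are omitted.

import numpy as np

def sat_box_filter(img, radius):
    """Mean filter of a 2D array via a summed-area table (window of size 2r+1)."""
    H, W = img.shape
    sat = np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    y0 = np.clip(np.arange(H) - radius, 0, H); y1 = np.clip(np.arange(H) + radius + 1, 0, H)
    x0 = np.clip(np.arange(W) - radius, 0, W); x1 = np.clip(np.arange(W) + radius + 1, 0, W)
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    total = sat[y1][:, x1] - sat[y0][:, x1] - sat[y1][:, x0] + sat[y0][:, x0]
    return total / area

def esm_shadow(occluder_z, receiver_z, c=40.0, radius=2):
    """occluder_z: depth map seen from the light; receiver_z: receiver depths,
    both in [0, 1] and already expressed in light space."""
    warped = np.exp(c * occluder_z)              # exponential warp (can overflow for
    filtered = sat_box_filter(warped, radius)    # large c, which the paper's structure avoids)
    visibility = filtered * np.exp(-c * receiver_z)
    return np.clip(visibility, 0.0, 1.0)         # 1 = fully lit, 0 = shadowed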

5.
The extension of the concepts of greyscale morphology to colour image processing requires a proper ordering of vectors (colours) and definitions of the infimum and supremum operators in an appropriate colour space. In this paper, a new approach to colour image morphology is proposed. It is based on a new ordering of vectors in the HSV colour space, which is a partial ordering. The proposed approach is hue preserving and is not a component-wise technique. Its basic characteristic is that it is compatible with standard greyscale morphology: its fundamental and secondary operations possess the same basic properties as their greyscale counterparts, and it reduces to greyscale morphology when applied to greyscale images. Examples that illustrate the application of the defined operations to colour images are provided. Moreover, the usefulness of the new method in various colour image processing applications, such as colour image edge detection, object recognition, vector top-hat filtering and skeleton extraction, is demonstrated.
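A small sketch of vector (colour) morphology of this flavour: dilation and erosion select, within each structuring-element neighbourhood, a complete HSV pixel that is extremal under a lexicographic ordering. The ordering used here, value then saturation then hue, is only a stand-in for the ordering defined in the paper; because a whole input pixel is selected rather than mixing components, hue is preserved.

import numpy as np

def _extremum(hsv_window, take_max):
    flat = hsv_window.reshape(-1, 3)
    h, s, v = flat[:, 0], flat[:, 1], flat[:, 2]
    # np.lexsort sorts by the LAST key first: primary V, then S, then H
    order = np.lexsort((h, s, v))
    return flat[order[-1] if take_max else order[0]]

def colour_morphology(hsv, radius=1, take_max=True):
    """hsv: float image (H, W, 3), channels in [0, 1].
    take_max=True gives dilation, False gives erosion (square structuring element)."""
    H, W = hsv.shape[:2]
    padded = np.pad(hsv, ((radius, radius), (radius, radius), (0, 0)), mode='edge')
    out = np.empty_like(hsv)
    for y in range(H):
        for x in range(W):
            win = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            out[y, x] = _extremum(win, take_max)
    return out

# Secondary operations follow the greyscale definitions, e.g.
# opening = dilation(erosion(img)) and closing = erosion(dilation(img)).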

6.
In this paper, we propose a highly accurate inpainting algorithm that reconstructs an image from a fraction of its pixels. Our algorithm is inspired by recent progress in non-local image processing techniques following the idea of 'grouping and collaborative filtering'. In our framework, we first match and group similar patches in the input image, then convert the problem of estimating the missing values of the stack of matched patches into a low-rank matrix completion problem, and finally obtain the result by synthesizing all the restored patches. The key points of the algorithm are accurate patch matching and solving the low-rank matrix completion problem. For the former we propose a robust patch matching approach, and for the latter we employ the alternating direction method of multipliers. Experiments show that our algorithm offers clear advantages over existing inpainting techniques. Moreover, it can easily be extended to practical applications including rendering acceleration, photo restoration and object removal.
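A minimal sketch of the completion step: similar patches are stacked as columns of a matrix with missing entries, and the stack is restored by iterative singular value thresholding, used here as a simpler stand-in for the ADMM solver employed in the paper. The threshold tau and iteration count are illustrative.

import numpy as np

def low_rank_complete(M, known, tau=1.0, n_iters=200):
    """M: (patch_size, n_patches) matrix of matched patches (arbitrary values
    at missing entries); known: boolean mask of observed entries."""
    X = np.where(known, M, 0.0)
    for _ in range(n_iters):
        # singular value soft-thresholding promotes a low-rank estimate
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        X[known] = M[known]          # keep the observed pixels fixed
    return X

# The restored patches are then scattered back into the image, averaging
# the contributions of overlapping patches.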

7.
Creating and animating subject-specific anatomical models is traditionally a difficult process involving medical image segmentation, geometric corrections and the manual definition of kinematic parameters. In this paper, we introduce a novel template morphing algorithm that facilitates three-dimensional modelling and parameterization of skeletons. Target data can be either medical images or surfaces of the whole skeleton. We incorporate prior knowledge about bone shape, the feasible skeleton pose and the morphological variability in the population. This allows for noise reduction, bone separation and the transfer, from the template, of anatomical and kinematic information not present in the input data. Our approach treats both local and global deformations in successive regularization steps: smooth elastic deformations are represented by an as-rigid-as-possible displacement field between the reference and current configuration of the template, whereas global and discontinuous displacements are estimated through a projection onto a statistical shape model and a new joint pose optimization scheme with joint limits.

8.
This paper presents a quick and simple method for converting complex images and video to perceptually accurate greyscale versions. We use a two-step approach: first, grey values are assigned globally and the colour ordering is determined; second, the greyscale is enhanced locally to reproduce the original contrast. Our global mapping is image independent and incorporates the Helmholtz-Kohlrausch colour appearance effect to predict differences between isoluminant colours. Our multiscale local contrast enhancement reintroduces lost discontinuities only in regions that insufficiently represent the original chromatic contrast. All operations are restricted so that they preserve the overall image appearance, lightness range and differences, colour ordering, and spatial details, resulting in perceptually accurate achromatic reproductions of the colour original.
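A sketch of the global step only, under strong simplification: grey values come from CIE lightness boosted by a chroma-dependent term, which is a crude stand-in for the Helmholtz-Kohlrausch correction (the paper uses a proper colour-appearance model and follows this with multiscale local contrast enhancement, omitted here). The gain k is an assumed illustrative parameter.

import numpy as np
from skimage import color

def global_grey(rgb, k=0.15):
    """rgb: float image in [0, 1], shape (H, W, 3); returns grey values in [0, 1]."""
    lab = color.rgb2lab(rgb)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    chroma = np.hypot(a, b)
    # more colourful pixels appear brighter than their luminance alone suggests
    L_eff = L + k * chroma
    return np.clip(L_eff / 100.0, 0.0, 1.0)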

9.
10.
We propose a method that improves automatic colour correction operations for rendered images. In particular, we propose a robust technique for estimating the visible and pertinent illumination in a given scene. We do this at very low computational cost by mostly re-using information that is already being computed during the image synthesis process. Conventional illuminant estimations either operate only on 2D image data, or, if they do go beyond pure image analysis, only use information on the luminaires found in the scene. The latter is usually done with little or no regard for how the light sources actually affect the part of the scene that is being viewed. Our technique goes beyond that, and also takes object reflectance into account, as well as the incident light that is actually responsible for the colour of the objects that one sees. It is therefore able to cope with difficult cases, such as scenes with mixed illuminants, complex scenes with many light sources of varying colour, or strongly coloured indirect illumination.

11.
Point cloud data is one of the most common types of input for geometric processing applications. In this paper, we study the point cloud density adaptation problem that underlies many pre-processing tasks on point data. Specifically, given a (sparse) set of points Q sampling an unknown surface and a target density function, the goal is to adapt Q to match the target distribution. We propose a simple and robust framework that is effective at achieving both local uniformity and precise global density distribution control. Our approach relies on the Gaussian-weighted graph Laplacian and works purely in the point setting. While it is well known that the graph Laplacian is related to mean-curvature flow and thus has denoising ability, our algorithm uses information encoded in the graph Laplacian that is orthogonal to the mean-curvature flow. Furthermore, by leveraging the natural scale parameter contained in the Gaussian kernel and combining it with a simulated annealing idea, our algorithm moves points in a multi-scale manner. The resulting algorithm relies far less than previous refinement-based methods on the input points having a good initial distribution (the input need be neither uniform nor close to the target density). We demonstrate the simplicity and effectiveness of our algorithm on point clouds sampled from underlying surfaces with various geometric and topological properties.
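A rough sketch of a single adaptation step under heavy simplification: each point moves along the tangential part of its Gaussian-weighted graph Laplacian (the component orthogonal to a PCA-estimated normal), which redistributes points over the surface rather than denoising it. The neighbourhood size, kernel width and step length are illustrative, and the paper's target-density control and multi-scale simulated-annealing schedule are omitted.

import numpy as np
from scipy.spatial import cKDTree

def adapt_step(P, k=16, sigma=0.05, step=0.5):
    """One redistribution step for an (N, 3) point set sampling a surface."""
    tree = cKDTree(P)
    dists, idx = tree.query(P, k=k + 1)
    dists, idx = dists[:, 1:], idx[:, 1:]          # drop the point itself
    w = np.exp(-dists ** 2 / (2 * sigma ** 2))     # Gaussian graph weights
    w /= np.maximum(w.sum(1, keepdims=True), 1e-12)
    lap = (w[..., None] * P[idx]).sum(1) - P       # graph Laplacian displacement
    # rough PCA normal per point (eigenvector of the smallest eigenvalue)
    nbrs = P[idx] - P[:, None, :]
    cov = np.einsum('nki,nkj->nij', nbrs, nbrs)
    normals = np.linalg.eigh(cov)[1][:, :, 0]
    # keep only the tangential component, i.e. the part orthogonal to the
    # mean-curvature-flow (normal) direction
    lap_t = lap - (lap * normals).sum(1, keepdims=True) * normals
    return P + step * lap_t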

12.
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or perform an orders-of-magnitude slower computation to obtain high-quality results for the indirect illumination. The proposed method improves photon density estimation and leads to significantly better visual quality, in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This also holds in combination with final gathering, where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.

13.
Physically based rendering systems often support spectral rendering to simulate light transport in the real world. Material representations in such simulations need to be defined as spectral distributions. Since commonly available material data are given as tristimulus colours, we would ideally like to obtain spectral distributions from tristimulus colours as input to spectral rendering systems. Reproducing a spectral distribution from a tristimulus colour, however, has been considered an ill-posed problem, since a single tristimulus colour corresponds to a set of different spectra due to metamerism. We show how to resolve this problem using a data-driven approach based on measured spectra and propose a practical algorithm that can faithfully reproduce a corresponding spectrum from the given tristimulus colour alone. The key observation from colour science is that a natural measured spectrum is usually well approximated by a weighted sum of a few basis functions. We show how to reformulate the conversion of tristimulus colours to spectra via principal component analysis. To improve the accuracy of the conversion, we propose a greedy clustering algorithm that minimizes the reconstruction error. Using pre-computation, the runtime computation is just a single matrix multiplication with the input tristimulus colour. Numerical experiments show that our method reproduces the reference measured spectra well using only the tristimulus colours as input.
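A minimal sketch of the reconstruction step: given a mean spectrum and a three-vector PCA basis of measured spectra, plus the colour-matching functions (weighted by the illuminant), the basis weights are obtained from a tiny linear system so that the reconstructed spectrum reproduces the input tristimulus value. The paper's greedy clustering of multiple bases is assumed to have been done beforehand; array names here are illustrative.

import numpy as np

def fit_conversion(cmf, mean_spectrum, basis):
    """cmf: (3, n_wavelengths) colour-matching functions (incl. illuminant);
    mean_spectrum: (n_wavelengths,); basis: (n_wavelengths, 3) PCA basis."""
    A = cmf @ basis                          # 3x3: tristimulus of each basis vector
    return np.linalg.inv(A), cmf @ mean_spectrum

def tristimulus_to_spectrum(xyz, A_inv, mean_xyz, mean_spectrum, basis):
    # runtime cost is a single small matrix multiplication per input colour
    w = A_inv @ (np.asarray(xyz) - mean_xyz)
    return np.clip(mean_spectrum + basis @ w, 0.0, None)    # keep the spectrum non-negative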

14.
In this paper, we present a novel exemplar-based technique for interpolation between two textures that combines patch-based and statistical approaches. Motivated by the notion of texture as a largely local phenomenon, we warp and blend small image neighborhoods prior to patch-based texture synthesis. In addition, interpolating and enforcing characteristic image statistics faithfully handles high-frequency detail. We are able to create both intermediate textures and continuous transitions. In contrast to previous techniques that compute a global morphing transformation on the entire input exemplar images, our localized, patch-based approach allows us to successfully interpolate between textures with considerable differences in feature topology, for which no smooth global warping field exists.

15.
Content-aware image retargeting is a technique that can flexibly display images at different aspect ratios while preserving salient regions. Many image retargeting techniques have recently been proposed. To compare the image quality achieved by different retargeting methods quickly and reliably, this paper presents an objective metric that simulates the human visual system (HVS). Unlike traditional objective assessment methods that work in a bottom-up manner (i.e., assembling pixel-level features in a local-to-global way), we propose to use the reverse order (a top-down manner), organizing image features from global to local viewpoints and leading to a new objective assessment metric for retargeted images. A scale-space matching method is designed to facilitate the extraction of global geometric structures from retargeted images. By traversing the scale space from coarse to fine levels, local pixel correspondence is also established. The objective assessment metric is then based on both the global geometric structures and the local pixel correspondence. To evaluate color images, the CIE L*a*b* color space is used. Experimental results measure the performance of objective assessment with the proposed metric and show good consistency between the proposed objective metric and subjective assessment by human observers.

16.
Because of its versatility, speed and robustness, shadow mapping has been a popular algorithm for fast hard shadow generation ever since its introduction in 1978, first for offline film production and later increasingly in real-time graphics. It is therefore not surprising that recent years have seen an explosion in the number of shadow-map-related publications. Because of the abundance of articles on the topic, it has become very hard for practitioners and researchers to select a suitable shadow algorithm, and therefore many applications miss out on the latest high-quality shadow generation approaches. The goal of this survey is to rectify this situation by providing a detailed overview of the field. We present a detailed analysis of shadow mapping errors and derive a comprehensive classification of the existing methods. We discuss the most influential algorithms, consider their benefits and shortcomings, and thereby provide readers with the means to choose the shadow algorithm best suited to their needs.

17.
Distribution effects such as diffuse global illumination, soft shadows and depth of field are most accurately rendered using Monte Carlo ray or path tracing. However, physically accurate algorithms can take hours to converge to a noise-free image. A recent body of work has begun to bridge this gap, showing that both individual and multiple effects can be achieved accurately and efficiently. These methods use sparse sampling, GPU ray tracers, and adaptive filtering for reconstruction. They are based on a Fourier analysis, which models distribution effects as a wedge in the frequency domain. The wedge can be approximated as a single large axis-aligned filter, which is fast but retains a large area outside the wedge and therefore requires a higher sampling rate; or as a tighter sheared filter, which is slow to compute. The state-of-the-art fast sheared filtering method combines a low sampling rate with efficient filtering, but has been demonstrated for individual distribution effects only and is limited by high-dimensional data storage and processing. We present a novel filter for efficient rendering of combined effects, involving soft shadows and depth of field with global (diffuse indirect) illumination. We approximate the wedge spectrum with multiple axis-aligned filters, marrying the speed of axis-aligned filtering with an even more accurate (compact and tighter) representation than sheared filtering. We demonstrate rendering of single effects at sampling and frame rates comparable to fast sheared filtering. Our main practical contribution is in rendering multiple distribution effects, which have not even been demonstrated accurately with sheared filtering. For this case, we present an average speedup of 6x compared with previous axis-aligned filtering methods.

18.
We present an automatic method to recover high-resolution texture over an object by mapping detailed photographs onto its surface. Such high-resolution detail often reveals inaccuracies in geometry and registration, as well as lighting variations and surface reflections. Simple image projection results in visible seams on the surface. We minimize such seams using a global optimization that assigns compatible texture to adjacent triangles. The key idea is to search not only combinatorially over the source images, but also over a set of local image transformations that compensate for geometric misalignment. This broad search space is traversed using a discrete labeling algorithm, aided by a coarse-to-fine strategy. Our approach significantly improves resilience to acquisition errors, thereby allowing simple and easy creation of textured models for use in computer graphics.

19.
We present an image processing method that converts a raster image into a simplicial 2-complex with only a small number of vertices (the base mesh), plus a parametrization that maps each pixel of the original image to barycentric coordinates within the triangle it finally falls into. Such a conversion of a raster image into a base mesh plus parametrization can be useful for many applications such as segmentation, image retargeting, multi-resolution editing with arbitrary topologies, edge-preserving smoothing, and compression. The goal of the algorithm is to produce a base mesh with small colour distortion and high shape fairness, together with a parametrization that is globally continuous both visually and numerically. Inspired by multi-resolution adaptive parametrization of surfaces and the quadric error metric, the algorithm converts the pixels of the image into a dense triangle mesh and performs error-bounded simplification that jointly considers geometry and colour. Eliminated vertices are projected onto an existing face. The implementation is iterative and stops when a prescribed error threshold is reached. The algorithm is feature-sensitive, i.e. salient feature edges in the image are preserved where possible, and it takes colour into account, thereby producing a better-quality triangulation.

20.
We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects. We start with a single rasterized view of the scene (the reference view) and sample the light field by warping the reference view to nearby views. We implement the algorithm using NVIDIA's CUDA for parallel processing and exploit atomic operations to resolve visibility when multiple pixels warp to the same image location. We then directly synthesize DoF effects from the sampled light field. To reduce aliasing artifacts, we propose an image-space filtering technique that compensates for spatial undersampling using MIP mapping. The main advantages of our algorithm are its simplicity and generality. We demonstrate interactive rendering of DoF effects in several complex scenes. Compared to existing methods, ours does not require ray tracing and hence scales well with scene complexity.
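A much-simplified sketch of the filtering idea only: the per-pixel circle of confusion selects a level from a pre-blurred, MIP-like image stack, which is how undersampling is compensated. The light-field warping and CUDA atomics of the actual algorithm are not reproduced here, and the thin-lens-style parameters are illustrative.

import numpy as np
from scipy import ndimage

def depth_of_field(img, depth, focus_depth, aperture=8.0, n_levels=5):
    """img: (H, W, 3) float image; depth: (H, W) per-pixel scene depth."""
    # thin-lens-style circle of confusion, expressed in pixels
    coc = aperture * np.abs(depth - focus_depth) / np.maximum(depth, 1e-6)
    level = np.clip(np.log2(1.0 + coc), 0.0, n_levels - 1.0)
    # pre-blurred stack: level i is blurred with sigma = 2**i - 1 (level 0 is sharp)
    stack = np.stack([ndimage.gaussian_filter(img, (2.0 ** i - 1.0, 2.0 ** i - 1.0, 0.0))
                      for i in range(n_levels)])
    lo = np.floor(level).astype(int)
    hi = np.minimum(lo + 1, n_levels - 1)
    t = (level - lo)[..., None]
    take = lambda lv: np.take_along_axis(stack, lv[None, ..., None], 0)[0]
    return (1.0 - t) * take(lo) + t * take(hi)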
