Similar documents
20 similar documents found.
1.
We present a technique implementing space-variant filtering of an image, with kernels belonging to a given family, in time independent of the size and shape of the filter kernel support. The essence of our method is efficient approximation of these kernels, belonging to an infinite family governed by a small number of parameters, as a linear combination of a small number k of “basis” kernels. The accuracy of this approximation increases with k, and requires O(k) storage space. Any kernel in the family may be applied to the image in O(k) time using precomputed results of the application of the basis kernels. Performing linear combinations of these values with appropriate coefficients yields the desired result. A trade-off between algorithm efficiency and approximation quality is obtained by adjusting k. The basis kernels are computed using singular value decomposition, distinguishing this from previous techniques designed to achieve a similar effect. We illustrate by applying our methods to the family of elliptic Gaussian kernels, a popular choice for filtering warped images.
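A minimal sketch of the basis-kernel idea from this abstract (our illustration, not the authors' code): sample the kernel family over its parameter range, take the leading right singular vectors as basis kernels, filter the image once with each, and reconstruct any family member as a linear combination. The Gaussian family, kernel size, and k below are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve

def make_gaussian_kernel(sigma, size=15):
    """Isotropic Gaussian kernel; stands in for one member of the kernel family."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

# 1. Sample the family over its parameter range and stack the kernels as rows.
sigmas = np.linspace(0.5, 4.0, 50)
A = np.stack([make_gaussian_kernel(s).ravel() for s in sigmas])

# 2. SVD gives the k "basis" kernels (leading right singular vectors).
k = 4
_, _, Vt = np.linalg.svd(A, full_matrices=False)
basis = Vt[:k].reshape(k, 15, 15)

# 3. Precompute the image filtered by each basis kernel (O(k) storage).
image = np.random.rand(128, 128)
filtered_basis = [convolve(image, b) for b in basis]

# 4. Any kernel in the family is then applied in O(k) time by combining the
#    precomputed results with coefficients from projecting it onto the basis.
target = make_gaussian_kernel(2.3).ravel()
coeffs = Vt[:k] @ target
result = sum(c * f for c, f in zip(coeffs, filtered_basis))
```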

2.
Mode estimation is extensively studied in statistics. One of the most widely used methods of mode estimation is hill-climbing on a kernel density estimator with gradient ascent or a fixed-point approach. Within this framework, Gaussian kernels prove to be a natural and intuitive option for non-parametric density estimation. This paper shows that in the case of high-dimensional data, mode estimation can be improved by using differently shaped kernels, called flat-top kernels. The improvement is illustrated with an image denoising application, in which pictures are decomposed into small patches, i.e. groups of adjacent pixels, that are vectorized. Noise in the patches can be attenuated by substituting them with the closest mode in the observed distribution of patches. The quality of the denoised picture then depends on the accuracy of mode estimation in a high-dimensional space. Experiments conducted on usual benchmarks in the image processing community show that flat-top kernels outperform the Gaussian one.
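To make the hill-climbing framework concrete, here is a hedged fixed-point (mean-shift style) mode seeker; the uniform-weight profile labelled flat_top is only a stand-in for the paper's flat-top kernels, whose exact shape may differ, and all names and sizes are illustrative.

```python
import numpy as np

def mode_estimate(x0, data, bandwidth=1.0, iters=100, flat_top=True):
    """Fixed-point iteration towards the nearest mode of a kernel density estimate."""
    x = x0.copy()
    for _ in range(iters):
        d2 = np.sum((data - x) ** 2, axis=1) / bandwidth**2
        if flat_top:
            w = (d2 <= 1.0).astype(float)   # flat weight inside the kernel support
        else:
            w = np.exp(-0.5 * d2)           # Gaussian weight
        if w.sum() == 0:
            break
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-6:
            break
        x = x_new
    return x

# Usage: seek the mode of a cloud of high-dimensional vectorized "patches".
patches = np.random.randn(500, 64) * 0.3        # e.g. 8x8 patches, vectorized
mode = mode_estimate(patches[0], patches, bandwidth=2.0)
```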

3.
Curvature-based surface features are well suited for use in multimodal medical image registration. The accuracy of such feature-based registration techniques is dependent upon the reliability of the feature computation. The computation of curvature features requires second derivative information that is best obtained from a parametric surface representation. We present a method of explicitly parameterizing surfaces from volumetric data. Surfaces are extracted, without global thresholding, using active contour models. A Monge basis for each surface patch is estimated and used to transform the patch into local, or parametric, coordinates. Surface patches are fit to a bicubic polynomial in local coordinates using least squares solved by singular value decomposition. We tested our method by reconstructing surfaces from the surface model and analytically computing Gaussian and mean curvatures. The model was tested on analytical and medical data.
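The local-coordinate fitting step can be sketched as an ordinary least-squares problem solved via SVD. This is our reading of the abstract, with illustrative variable names; the curvature formulas in the comments are the standard Monge-patch expressions, not the authors' code.

```python
import numpy as np

def fit_bicubic(u, v, z):
    """Least-squares bicubic fit z(u, v) = sum_{i,j<=3} c[i,j] * u**i * v**j."""
    # Design matrix: one column per monomial u^i * v^j, i, j in 0..3.
    A = np.column_stack([u**i * v**j for i in range(4) for j in range(4)])
    # Solve A c ≈ z with the pseudo-inverse obtained from the SVD.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    c = Vt.T @ ((U.T @ z) / s)
    return c.reshape(4, 4)

# Example: fit noisy samples from a paraboloid in local (u, v) coordinates.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)
v = rng.uniform(-1, 1, 200)
z = 0.5 * u**2 + 0.3 * v**2 + 0.01 * rng.standard_normal(200)
C = fit_bicubic(u, v, z)

# Gaussian curvature at the patch origin then follows from the fitted derivatives:
# z_u = C[1,0], z_v = C[0,1], z_uu = 2*C[2,0], z_uv = C[1,1], z_vv = 2*C[0,2],
# K = (z_uu*z_vv - z_uv**2) / (1 + z_u**2 + z_v**2)**2
```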

4.
It is difficult to render caustic patterns at interactive frame rates. This paper introduces new rendering techniques that relax current constraints, allowing scenes with moving, non-rigid scene objects, rigid caustic objects, and rotating directional light sources to be rendered in real time with GPU hardware acceleration. Because our algorithm estimates the intensity and the direction of caustic light, rendering of non-Lambertian surfaces is supported. Previous caustics algorithms have separated the problem into pre-rendering and rendering phases, storing intermediate results in data structures such as photon maps or radiance transfer functions. Our central idea is to use specially parameterized spot lights, called caustic spot lights (CSLs), as the intermediate representation of a two-phase algorithm. CSLs are flexible enough that a small number can approximate the light leaving a caustic object, yet simple enough that they can be efficiently evaluated by a pixel shader program during accelerated rendering. We extend our approach to support changing lighting direction by further dividing the pre-rendering phase into per-scene and per-frame components: the per-frame phase computes frame-specific CSLs by interpolating between CSLs that were pre-computed with differing light directions.

5.
We present a physically based progressive global illumination system that is capable of simulating complex lighting situations robustly by efficiently using both light and eye paths. Specifically, we combine three distinct algorithms: point-light-based illumination, which produces low-noise approximations for diffuse inter-reflections; specular gathering for glossy and singular effects; and a caustic histogram method for the remaining light paths. The combined system efficiently renders low-noise, production-quality images with indirect illumination from arbitrary light sources, including inter-reflections from caustics, and allows for simulating depth-of-field and dispersion effects. Our system computes progressive approximations by continuously refining the solution using a constant memory footprint, without the need for pre-computations or for optimizing parameters beforehand.

6.
Skinning, also called lofting, is a powerful and popular method for modeling complex shapes. A surface modeled by current skinning techniques may nevertheless be far from developable, an important property in manufacturing applications such as ship hulls, aircraft wings and bodies, and garments. In this paper, a novel approach to skinning surface modeling is proposed. The proposed method interpolates the given curves with a collection of G1-continuous self-defined triangular patches, and these patches are assembled together by globally minimizing the integral Gaussian curvature, i.e., the degree of developability. The proposed algorithm has been tested on a set of examples, and the test results have demonstrated its promising use in a variety of applications.

7.
In this work, we introduce a novel algorithm for transient rendering in participating media. Our method is consistent, robust, and able to generate animations of time-resolved light transport featuring complex caustic light paths in media. We base our method on the observation that spatial continuity provides increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend steady-state photon beam radiance estimates to include the temporal domain. Then, we develop a progressive variant of our approach which provably converges to the correct solution using finite memory, by averaging independent realizations of the estimates with progressively reduced kernel bandwidths. We derive the optimal convergence rates accounting for space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media.

8.
We present block algorithms and their implementation for the parallelization of sub-cubic Gaussian elimination on shared-memory architectures. In contrast to the classical cubic algorithms in parallel numerical linear algebra, we focus here on recursive algorithms and coarse-grain parallelization. Indeed, sub-cubic matrix arithmetic can only be achieved through recursive algorithms, making coarse-grain block algorithms perform more efficiently than fine-grain ones. This work is motivated by the design and implementation of dense linear algebra over a finite field, where fast matrix multiplication is used extensively and where costly modular reductions also advocate for coarse-grain block decomposition. We incrementally build efficient kernels, for matrix multiplication first, then triangular system solving, on top of which a recursive PLUQ decomposition algorithm is built. We study the parallelization of these kernels using several algorithmic variants: either iterative or recursive, and using different splitting strategies. Experiments show that recursive adaptive methods for matrix multiplication, hybrid recursive–iterative methods for triangular system solving, and tile-recursive versions of the PLUQ decomposition, together with various data mapping policies, provide the best performance on a 32-core NUMA architecture. Overall, we show that the overhead of modular reductions is more than compensated for by the fast linear algebra algorithms, and that exact dense linear algebra matches the performance of full-rank reference numerical software even in the presence of rank deficiencies.
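Sub-cubic matrix multiplication of the kind referred to above is typically obtained with Strassen-style recursion. The sketch below is a generic illustration over floating-point matrices (the paper works over a finite field, where each entry would additionally be reduced modulo a prime), restricted to square power-of-two sizes for brevity; it is not the authors' implementation.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Recursive sub-cubic (Strassen) matrix product; falls back to the classical
    product below the cutoff, mirroring the coarse-grain recursive structure."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
C = strassen(A, B)        # matches A @ B up to rounding error
```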

9.
Super-resolution (SR) methods are effective for generating a high-resolution image from a single low-resolution image. However, four problems are observed in existing SR methods. (1) They cannot reconstruct many details from a low-resolution infrared image because infrared images always lack detailed information. (2) They cannot extract the desired information from images because they do not consider that images naturally come at different scales in many cases. (3) They fail to reveal the different physical structures of low-resolution patches because they extract features from a single view. (4) They fail to extract all the different patterns because they use only one dictionary to represent all patterns. To overcome these problems, we propose a novel SR method for infrared images. First, we combine the information of high-resolution visible-light images and low-resolution infrared images to improve the resolution of infrared images. Second, we use multiscale patches instead of fixed-size patches to represent infrared images more accurately. Third, we use different feature vectors rather than a single feature to represent infrared images. Finally, we divide training patches into several clusters, and multiple dictionaries are learned for each cluster to provide each patch with a more accurate dictionary. In the proposed method, clustering information for low-resolution patches is learned using fuzzy clustering theory. Experiments validate that the proposed method yields better results in terms of quantitative metrics and visual perception than the state-of-the-art algorithms.

10.
We recover 3D models of objects with specular surfaces. An object is rotated and a continuous sequence of images of it is taken. Circular-shaped light sources that generate conic rays are used to illuminate the rotating object in such a way that highlighted stripes can be observed on most of the specular surfaces. Surface shapes can be computed from the motions of highlights in the continuous images; either specular motion stereo or single specular trace mode can be used. When the lights are properly set, each point on the object can be highlighted during the rotation. The shape for each rotation plane is measured independently using its corresponding epipolar plane image. A 3D shape model is subsequently reconstructed by combining shapes at different rotation planes. Computing a shape is simple and requires only the motion of the highlight on each rotation plane. The novelty of this paper is the complete modeling of a general type of specular objects, which has not been accomplished before.

11.
We address the problem of probability density function estimation using a Gaussian mixture model updated with the expectation-maximization (EM) algorithm. To deal with the case of an unknown number of mixing kernels, we define a new measure for Gaussian mixtures, called total kurtosis, which is based on the weighted sample kurtoses of the kernels. This measure provides an indication of how well the Gaussian mixture fits the data. Then we propose a new dynamic algorithm for Gaussian mixture density estimation which monitors the total kurtosis at each step of the EM algorithm in order to decide dynamically on the correct number of kernels and possibly escape from local maxima. We show the potential of our technique in approximating unknown densities through a series of examples with several density estimation problems.
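A hedged reading of the total-kurtosis measure, in one dimension: compute a responsibility-weighted sample kurtosis per kernel and combine the per-kernel values with the mixing weights. This is our interpretation of the abstract, not the authors' exact formula.

```python
import numpy as np

def total_kurtosis(x, means, variances, weights):
    """1-D Gaussian mixture; x is the data, the rest are current EM parameters."""
    # Responsibilities r[n, j] = P(kernel j | x_n).
    comp = (weights * np.exp(-0.5 * (x[:, None] - means) ** 2 / variances)
            / np.sqrt(2 * np.pi * variances))
    r = comp / comp.sum(axis=1, keepdims=True)
    kurt = []
    for j in range(len(weights)):
        w = r[:, j]
        mu = np.average(x, weights=w)
        m2 = np.average((x - mu) ** 2, weights=w)
        m4 = np.average((x - mu) ** 4, weights=w)
        kurt.append(m4 / m2**2)          # weighted sample kurtosis of kernel j
    # For a well-fitting Gaussian kernel the per-kernel kurtosis is close to 3.
    return float(np.dot(weights, kurt))

x = np.concatenate([np.random.normal(-2, 1, 300), np.random.normal(2, 1, 300)])
tk = total_kurtosis(x, means=np.array([-2.0, 2.0]),
                    variances=np.array([1.0, 1.0]), weights=np.array([0.5, 0.5]))
```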

12.
When projectors are used to display images on complex, non-planar surface geometry, indirect illumination between the surfaces will disrupt the final appearance of this imagery, generally increasing brightness, decreasing contrast, and washing out colors. In this paper we predict this unintentional indirect component through global illumination simulation and solve for the optimal compensated projection imagery that minimizes the difference between the desired imagery and the actual total illumination in the resulting physical scene. Our method makes use of quadratic programming to minimize this error within the constraints of the physical system, namely, that negative light is physically impossible. We demonstrate our compensation optimization in both computer simulation and physical validation within a table-top spatially augmented reality system. We present an application of these results for visualization of interior architectural illumination. To facilitate interactive modifications to the scene geometry and desired appearance, our system is accelerated with a CUDA implementation of the QP optimization method.
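The non-negativity-constrained least-squares core of the compensation step might look like the toy sketch below, where the indirect-transport matrix F, the problem size, and the use of scipy's non-negative least-squares solver in place of a general QP solver are all illustrative stand-ins for the paper's formulation.

```python
import numpy as np
from scipy.optimize import nnls

# Toy model: the physical appearance is the projected image plus simulated
# indirect light, T = (I + F) p, and we seek p >= 0 minimizing ||(I + F) p - d||^2.
n = 100                                  # number of projector pixels (toy size)
rng = np.random.default_rng(1)
F = 0.05 * rng.random((n, n))            # simulated indirect transport (from GI)
d = rng.random(n)                        # desired appearance
A = np.eye(n) + F                        # direct + indirect contribution per pixel
p, residual = nnls(A, d)                 # compensated image, constrained to p >= 0
```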

13.
We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots, or stripes, or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.
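A toy version of the vocabulary-building step, under our own simplifying assumptions: per-pixel Gaussian-derivative filter responses are stacked and clustered with k-means. The filter bank, image data, and cluster count are placeholders, not the paper's setup.

```python
import numpy as np
from scipy import ndimage
from scipy.cluster.vq import kmeans2

def filter_responses(img):
    """Per-pixel response vector from a tiny Gaussian-derivative filter bank."""
    feats = []
    for sigma in (1.0, 2.0, 4.0):
        feats.append(ndimage.gaussian_filter(img, sigma))
        feats.append(ndimage.gaussian_filter(img, sigma, order=(0, 1)))  # d/dx
        feats.append(ndimage.gaussian_filter(img, sigma, order=(1, 0)))  # d/dy
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(5)]
X = np.vstack([filter_responses(im) for im in images])
centroids, labels = kmeans2(X, k=20, seed=0)   # ~20 "textons" for this toy example
```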

14.
This paper proposes a supervised multiscale Bayesian texture classifier. The classifier exploits the dual-tree complex wavelet transform (DT-CWT) to obtain complex-valued multiscale representations of training texture samples for each texture class. The high-pass subbands of the DT-CWT decomposition of a texture image are used to form a multiscale feature vector representing magnitude and phase features. For computational efficiency, the dimensionality of the feature vectors is reduced using principal component analysis (PCA). The class-conditional probability density function of the low-dimensional feature vectors for each texture class is then estimated using a Parzen-window estimate with identical Gaussian kernels and is used to represent the texture class. A query texture image is classified as the texture class with the highest a posteriori probability according to Bayesian inference. The superior performance and robustness of the proposed classifier are demonstrated by classifying texture images from image databases. The proposed multiscale texture feature vector, extracted from both the magnitude and phase of the DT-CWT subbands of a query image, is also shown to be effective for texture retrieval.
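The Parzen-window decision rule described above can be sketched as follows; the DT-CWT feature extraction and PCA reduction are omitted, and the bandwidth, priors, and data are illustrative.

```python
import numpy as np

def parzen_log_density(x, samples, h):
    """Log of a Parzen estimate with identical isotropic Gaussian kernels of bandwidth h."""
    d = samples.shape[1]
    sq = np.sum((samples - x) ** 2, axis=1) / (2 * h**2)
    log_kernels = -sq - 0.5 * d * np.log(2 * np.pi * h**2)
    return np.logaddexp.reduce(log_kernels) - np.log(len(samples))

def classify(x, class_samples, priors, h=0.5):
    """Assign x to the class with the highest posterior (Bayesian decision rule)."""
    scores = [np.log(p) + parzen_log_density(x, s, h)
              for s, p in zip(class_samples, priors)]
    return int(np.argmax(scores))

# Usage with toy low-dimensional feature vectors.
rng = np.random.default_rng(0)
class_samples = [rng.normal(m, 1.0, (100, 8)) for m in (-1.0, 1.0)]
label = classify(rng.normal(1.0, 1.0, 8), class_samples, priors=[0.5, 0.5], h=0.8)
```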

15.
Developable surfaces have many desired properties in the manufacturing process. Since most existing CAD systems utilize tensor-product parametric surfaces, including B-splines, as design primitives, there is a great demand in industry to convert a general free-form parametric surface into developable patches within a prescribed global error bound. In this paper, we propose a practical and efficient solution to approximate a rectangular parametric surface with a small set of C0-joint developable strips. The key contribution of the proposed algorithm is that several optimization problems are elegantly solved in a sequence that offers a controllable global error bound on the developable surface approximation. Experimental results are presented to demonstrate the effectiveness and stability of the proposed algorithm.

16.
When using radiosity, the visual quality of the rendered images strongly depends on the method employed for discretizing the scene into patches. A too-fine discretization may give rise to artifacts, while with a coarse discretization, errors may appear in areas with a high radiosity gradient. To overcome these problems, the discretization must adapt to the scene. That is, the interaction between two patches must account for the distance between them as well as their surface area. In other words, surfaces far away are discretized less finely than nearby surfaces. These aspects are considered by the new adaptive discretization method described in this paper. It performs both discretization and system resolution at each iteration of the shooting process, thus allowing interactivity.

17.
We present a generalization of the convolution-based variational image registration approach, in which different regularizers can be implemented by conveniently exchanging the convolution kernel, even if it is nonseparable or nonstationary. Nonseparable kernels pose a challenge because they cannot be efficiently implemented by separate 1D convolutions. We propose to use a low-rank tensor decomposition to efficiently approximate nonseparable convolution. Nonstationary kernels pose an even greater challenge because the convolution kernel depends on, and needs to be evaluated for, every point in the image. We propose to pre-compute the local kernels and efficiently store them in memory using the Tucker tensor decomposition model. In our experiments, we use the nonseparable exponential kernel and a nonstationary landmark kernel. The exponential kernel replicates desirable properties of elastic image registration, while the landmark kernel incorporates local prior knowledge about corresponding points in the images. We examine the trade-off between the computational resources needed and the approximation accuracy of the tensor decomposition methods. Furthermore, we obtain very smooth displacement fields even in the presence of large landmark displacements.
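The separable low-rank idea can be illustrated by approximating a single nonseparable 2D kernel with a rank-r sum of outer products from its SVD, so that one 2D convolution becomes r pairs of 1D convolutions; the paper goes further and uses a Tucker model for whole fields of nonstationary kernels. The kernel and sizes below are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve1d

def lowrank_convolve(image, kernel, rank):
    """Approximate convolution with a nonseparable 2D kernel via its rank-r SVD."""
    U, s, Vt = np.linalg.svd(kernel)
    out = np.zeros_like(image, dtype=float)
    for r in range(rank):
        col = U[:, r] * np.sqrt(s[r])       # 1D filter along rows
        row = Vt[r, :] * np.sqrt(s[r])      # 1D filter along columns
        tmp = convolve1d(image, col, axis=0)
        out += convolve1d(tmp, row, axis=1)
    return out

# Example: a nonseparable exponential kernel exp(-||x|| / b).
ax = np.arange(-7, 8)
xx, yy = np.meshgrid(ax, ax)
kernel = np.exp(-np.sqrt(xx**2 + yy**2) / 3.0)
kernel /= kernel.sum()
image = np.random.rand(64, 64)
approx = lowrank_convolve(image, kernel, rank=3)
```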

18.
This paper describes a novel implementation of a stochastic optimization technique for the face recognition problem. The proposed method divides the original images into patches in space and seeks a non-linear functional mapping using second-order Volterra kernels. The artificial bee colony optimization technique, a modern stochastic optimization algorithm, is used to derive optimal Volterra kernels during training so as to simultaneously maximize inter-class distances and minimize intra-class distances in the feature space. During testing, a voting procedure is used in conjunction with a nearest-neighbor classifier to decide to which class each individual patch belongs. Finally, the aggregate classification results of all patches in an image are used to determine the overall recognition outcome for the given image. The utility of the proposed scheme is demonstrated by implementing it on two popular benchmark face recognition datasets and comparing the effectiveness of the proposed approach with other statistical learning procedures for facial recognition, as well as several other existing methods. The effectiveness of the artificial bee colony optimization technique and its Levy-mutated variant in optimizing Volterra kernels is conclusively demonstrated by significantly outperforming many popular contemporary algorithms.

19.
EWA splatting
We present a framework for high quality splatting based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter, combining a reconstruction kernel with a low-pass filter. Because of the similarity to Heckbert's (1989) EWA (elliptical weighted average) filter for texture mapping, we call our technique EWA splatting. Our framework allows us to derive EWA splat primitives for volume data and for point-sampled surface data. It provides high image quality without aliasing artifacts or excessive blurring for volume data and, additionally, features anisotropic texture filtering for point-sampled surfaces. It also handles nonspherical volume kernels efficiently; hence, it is suitable for regular, rectilinear, and irregular volume datasets. Moreover, our framework introduces a novel approach to compute the footprint function, facilitating efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in rendering surface and volume data.
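The key property behind the EWA resampling filter is that an elliptical Gaussian reconstruction kernel convolved with a Gaussian low-pass filter is again a Gaussian whose covariance is the sum of the two. Below is a toy per-pixel weight evaluation under that assumption, with illustrative values; it is not the paper's renderer.

```python
import numpy as np

def ewa_weight(dx, dy, V_reconstruction, V_lowpass):
    """Resampling-filter weight of a splat at screen-space offset (dx, dy)."""
    V = V_reconstruction + V_lowpass        # combined resampling filter covariance
    Vinv = np.linalg.inv(V)
    d = np.array([dx, dy])
    r2 = d @ Vinv @ d                       # Mahalanobis distance to the splat center
    return np.exp(-0.5 * r2) / (2 * np.pi * np.sqrt(np.linalg.det(V)))

V_r = np.array([[4.0, 1.5], [1.5, 1.0]])    # projected elliptical reconstruction kernel
V_h = np.eye(2) * 0.25                      # screen-space low-pass filter
w = ewa_weight(0.8, -0.3, V_r, V_h)         # contribution of the splat to a nearby pixel
```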

20.
In this article, we describe a module for the identification of brand logos from video data. A model for the visual appearance of each logo is generated from a small number of sample images using multidimensional histograms of scale-normalized chromatic Gaussian receptive fields. We compare several identification techniques based on multidimensional histograms. Each of the methods displays high recognition rates and can be used for logo identification. Our method for calculating scale-normalized Gaussian receptive fields has linear computational complexity and is thus well adapted to a real-time system. However, with the current generation of microprocessors we obtain at best only two images per second when processing a full PAL video stream. To accelerate the process, we propose an architecture that combines fast detection, reliable identification, and fast tracking for speedup. The resulting real-time system is evaluated using video streams from Formula 1 races and football. Published online: 6 October 2004. James L. Crowley: This research is funded by the European Commission's IST project DETECT (IST-2001-32157).
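A toy sketch of histogram-based identification in the spirit of this abstract: build a normalized multidimensional histogram per logo model and match a query by histogram intersection. The stubbed random features, bin counts, and the intersection measure are our illustrative choices, not the authors' pipeline.

```python
import numpy as np

def build_histogram(features, bins=8, value_range=(-1.0, 1.0)):
    """Normalized multidimensional histogram of receptive-field response vectors."""
    h, _ = np.histogramdd(features, bins=bins, range=[value_range] * features.shape[1])
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return np.minimum(h1, h2).sum()

# Stub features: 3-D response vectors drawn from per-logo distributions.
gen = np.random.default_rng(0)
models = {name: build_histogram(gen.normal(mu, 0.2, (2000, 3)))
          for name, mu in [("logoA", -0.3), ("logoB", 0.3)]}
query = build_histogram(gen.normal(-0.3, 0.2, (500, 3)))
best = max(models, key=lambda name: intersection(models[name], query))
```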
