We present a compression scheme that is useful for interactive video applications such as browsing a multimedia database.
The focus of our approach is the development of a compression scheme (and a corresponding retrieval scheme) that is optimal for
any data rate; such a scheme is essential for browsing a multimedia database. We use a multiresolution setting,
but eliminate the need for wavelets, which results in much better compression. We show experimental results and explain in
detail how to extend our approach to multidimensional data.
A general and flexible framework for wavelet-based decompositions of stationary time series in discrete time, called adaptive wavelet decompositions (AWDs), is introduced. It is shown that several particular AWDs can be constructed with the aim of providing decomposition (approximation and detail) coefficients that exhibit certain desirable statistical properties, where the latter can be chosen based on a range of theoretical or applied considerations. AWDs make use of a Fast Wavelet Transform-like algorithm whose filters, in contrast with their counterparts in Orthogonal Wavelet Decompositions (OWDs), may depend on the scale. As with OWDs, this algorithm has good properties such as computational efficiency and invariance to polynomial trends. A property whose pursuit plays a central role in this work is the decorrelation of the detail coefficients. For many time series models (e.g., FARIMA(0,δ,0)), the AWD filters can be defined so that the resulting AWD detail coefficients are all (exactly) decorrelated. The corresponding AWDs, called Exact AWDs (EAWDs), are particularly useful in the simulation of Gaussian stationary time series if the associated filters have a fast decay. The proposed simulation methods generalize and improve upon existing wavelet-based ones. AWDs for which the detail coefficients are not exactly decorrelated, but still more decorrelated than those of OWDs, are referred to as approximate AWDs (AAWDs). They can be obtained by truncating EAWD filters, or by adopting some of the existing approaches to modeling the dependence structure of the OWD detail coefficients (e.g., Craigmile et al., 2005). AAWDs naturally lead to new wavelet-based maximum likelihood estimators. The performance of these estimators is investigated through simulations and from some theoretical standpoints. The focus in estimation is also on Gaussian stationary series, though the method is expected to work for non-Gaussian stationary series as well.
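As a concrete illustration of the scale-dependent filter bank sketched in this abstract, the following is a minimal pyramid decomposition in which the filter pair may differ at each level. This is not the EAWD construction itself: the function names are hypothetical, and in the actual method the filters would be derived from the series' statistical structure rather than fixed to Haar as here.

```python
import numpy as np

def circ_filter(x, h):
    # circular convolution via FFT (periodic boundary handling)
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

def awd_decompose(x, filters):
    """Pyramidal decomposition in the spirit of an AWD: the (lowpass,
    highpass) pair may change from one scale to the next, whereas an
    orthogonal DWT reuses the same pair at every level."""
    details = []
    approx = np.asarray(x, dtype=float)
    for h, g in filters:
        details.append(circ_filter(approx, g)[::2])  # detail coefficients
        approx = circ_filter(approx, h)[::2]         # coarser approximation
    return approx, details

# demo with the same (Haar) pair at every scale; an AWD would vary the pair
s = 1.0 / np.sqrt(2.0)
haar = (np.array([s, s]), np.array([s, -s]))
x = np.random.default_rng(0).standard_normal(64)
approx, details = awd_decompose(x, [haar, haar, haar])
energy = np.sum(approx**2) + sum(np.sum(d**2) for d in details)
```

With orthonormal filters and periodic boundaries the transform preserves the signal's energy, which is a quick sanity check on the implementation.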
An efficient algorithm for image segmentation based on a multi-resolution application of a wavelet transform and feature distributions is presented. The original feature space is transformed into a lower resolution with a wavelet transform to enable fast computation of the optimum threshold value in the feature space. Based on this lower-resolution version of the given feature space, one or more feature values are determined as the optimum threshold values. The optimum feature values, which are at the lower resolution, are projected back onto the original feature space. In this step a refinement procedure may be added to detect the optimum threshold value. Experimental results indicate the feasibility and reliability of the proposed algorithm for fast image segmentation.
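The coarse-to-fine thresholding idea above can be sketched as follows, using Otsu's between-class-variance criterion as a stand-in for the abstract's unspecified optimality criterion. All names are hypothetical, and for brevity the refinement step scores the full histogram and keeps only a local window.

```python
import numpy as np

def otsu_scores(hist):
    """Between-class variance for every candidate threshold t (Otsu's
    criterion); the optimum threshold maximizes this score."""
    hist = hist.astype(float)
    levels = np.arange(len(hist))
    scores = np.zeros(len(hist))
    for t in range(1, len(hist)):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        m0 = (levels[:t] * hist[:t]).sum() / w0
        m1 = (levels[t:] * hist[t:]).sum() / w1
        scores[t] = w0 * w1 * (m0 - m1) ** 2
    return scores

def coarse_to_fine_threshold(hist, levels_down=2):
    """Threshold a downsampled histogram cheaply, project the result back
    to full resolution, then refine it inside a small local window."""
    coarse = hist.astype(float)
    for _ in range(levels_down):
        coarse = coarse[0::2] + coarse[1::2]   # lowpass + dyadic decimation
    t = int(np.argmax(otsu_scores(coarse))) * 2 ** levels_down
    w = 2 ** levels_down                        # refinement half-width
    lo = max(1, t - w)
    local = otsu_scores(hist)[lo:t + w + 1]
    return lo + int(np.argmax(local))

# synthetic bimodal feature histogram with modes near 60 and 190
bins = np.arange(256)
hist = np.exp(-((bins - 60) / 15.0) ** 2) + np.exp(-((bins - 190) / 15.0) ** 2)
threshold = coarse_to_fine_threshold(hist)
```

The coarse pass shrinks the search from 256 candidate thresholds to 64, and the refinement recovers full-resolution accuracy, which is the essence of the speedup claimed.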
The display of image fusion is well accepted as a powerful tool in visual image analysis and comparison. In clinical practice, this is a mandatory step when studying images from a dual PET/CT scanner. However, the display methods implemented on most workstations simply show both images side by side, in separate and synchronized windows. Sometimes images are presented superimposed in a single window, preventing the user from performing quantitative analysis. In this article, a new image fusion scheme is presented that allows quantitative analysis to be performed directly on the fused images.
Methods
The objective is to preserve the functional information provided by PET while incorporating higher-resolution details from the CT image. The process relies on discrete wavelet-based image merging: both images are decomposed into successive detail layers using the “à trous” transform. This algorithm performs a wavelet decomposition of the images and provides versions of them at coarser and coarser spatial resolutions. The high spatial frequencies of the CT, or details, can easily be obtained at any level of resolution. A simple model is then inferred to compute the missing details of the PET scan from the high-frequency detail layers of the CT. These details are incorporated into the PET image on a voxel-by-voxel basis, giving the fused PET/CT image.
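A rough sketch of the decompose-and-inject step is given below, assuming a B3-spline à trous kernel and, for brevity, periodic image boundaries. It simply adds the CT detail layers to the PET, whereas the actual method infers a model to compute the missing PET details; all names are hypothetical.

```python
import numpy as np

def atrous_smooth(img, level):
    """One smoothing step of the "a trous" transform: a B3-spline kernel
    whose taps are spaced 2**level apart (the "holes")."""
    kernel = np.array([1, 4, 6, 4, 1]) / 16.0
    step = 2 ** level
    out = img.astype(float)
    for axis in (0, 1):                      # separable row/column filtering
        acc = np.zeros_like(out)
        for i, w in enumerate(kernel):
            acc += w * np.roll(out, (i - 2) * step, axis=axis)  # periodic edges
        out = acc
    return out

def atrous_fuse(pet, ct, levels=2):
    """Inject the high-frequency detail layers of the CT into the PET.
    Detail layer j is c_j - c_{j+1}, the difference of successive smooths."""
    fused = pet.astype(float)
    c = ct.astype(float)
    for j in range(levels):
        c_next = atrous_smooth(c, j)
        fused += c - c_next                  # add CT detail layer w_j
        c = c_next
    return fused

rng = np.random.default_rng(0)
pet = rng.random((32, 32)) * 10.0            # stand-in for the functional image
ct = rng.random((32, 32)) * 100.0            # stand-in for high-res anatomy
fused = atrous_fuse(pet, ct)
```

Because the smoothing kernel is normalized, each injected detail layer has zero mean, so the mean intensity of the PET is preserved in the fused image, matching the quantitative property reported in the Results.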
Results
Aside from the expected visual enhancement, quantitative comparison of initial PET and CT images with fused images was performed in 12 patients. The obtained results were in accordance with the objectives of the study, in the sense that the organs’ mean intensity in PET was preserved in the fused image.
Conclusion
This alternative approach to PET/CT fusion display should be of interest to those seeking a more quantitative use of image fusion. The proposed method is complementary to more classical visualization tools.
In this paper, we study the denoising of multicomponent images. The presented procedures are spatial wavelet-based denoising techniques, based on Bayesian least-squares optimization, using prior models for the wavelet coefficients that account for the correlations between the spectral bands. We analyze three mixture priors: Gaussian scale mixture models, Bernoulli-Gaussian mixture models, and Laplacian mixture models. These three prior models are studied within the same least-squares optimization framework. The presented procedures are compared to a Gaussian prior model and to single-band denoising procedures. We analyze the suppression of non-correlated as well as correlated white Gaussian noise on multispectral and hyperspectral remote sensing data, and of Rician-distributed noise on multiple images of within-modality magnetic resonance data. It is shown that superior denoising performance is obtained when (a) the interband covariances are fully accounted for and (b) prior models are used that better approximate the marginal distributions of the wavelet coefficients.
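For the simplest of the priors mentioned, a joint Gaussian across bands, the Bayesian least-squares estimate has a closed Wiener-like form. The sketch below illustrates it on synthetic interband-correlated data; the names and the covariance-estimation step are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def bls_gaussian_denoise(noisy, noise_cov):
    """Bayesian least-squares estimate under a joint Gaussian prior across
    bands: x_hat = C_s (C_s + C_n)^{-1} y per coefficient vector, with the
    signal covariance C_s estimated as C_y - C_n projected to be PSD."""
    y = noisy.reshape(noisy.shape[0], -1)          # bands x coefficients
    c_y = np.cov(y, bias=True)
    vals, vecs = np.linalg.eigh(c_y - noise_cov)   # C_s ~ C_y - C_n
    c_s = (vecs * np.clip(vals, 0.0, None)) @ vecs.T
    gain = c_s @ np.linalg.inv(c_s + noise_cov)    # Wiener-like gain matrix
    return (gain @ y).reshape(noisy.shape)

rng = np.random.default_rng(1)
bands, n = 3, 4096
mixing = rng.standard_normal((bands, bands))
clean = mixing @ rng.standard_normal((bands, n))   # interband-correlated signal
noisy = clean + rng.standard_normal((bands, n))    # unit-variance white noise
denoised = bls_gaussian_denoise(noisy, np.eye(bands))
```

Exploiting the interband covariance in the gain matrix is what distinguishes this estimate from applying a scalar shrinkage to each band independently.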
An adaptive control algorithm with a neural network model, previously proposed in the literature for the control of mechanical manipulators, is applied to a CSTR (Continuous Stirred Tank Reactor). The neural network model uses either radial Gaussian or “Mexican hat” wavelets as basis functions. This work shows that the addition of linear functions to the networks significantly improves the error convergence when the CSTR is operated for long periods of time in a neighborhood of one operating point, a common scenario in chemical process control. A quantitative study based on output errors and control efforts is then conducted, comparing adaptive controllers using wavelet or Gaussian basis functions with PID controllers (IMC tuning with fixed parameters and self-tuning PID). From this study, the practicality and advantages of the adaptive controllers over fixed or adaptive PID control are assessed.
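The benefit of adding linear functions to a radial-basis network can be illustrated with an offline least-squares fit. The paper's controller adapts its weights online; this sketch, with hypothetical names, only shows why affine terms help when the plant response carries a linear component that sparse localized basis functions cover poorly.

```python
import numpy as np

def design_matrix(x, centers, width, add_linear):
    """Gaussian radial basis functions, optionally augmented with the
    affine terms [1, x] whose addition the abstract discusses."""
    phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    if add_linear:
        phi = np.hstack([phi, np.ones((len(x), 1)), x[:, None]])
    return phi

def fit_predict(x, y, centers, width, add_linear):
    # ordinary least-squares weight fit, then in-sample prediction
    phi = design_matrix(x, centers, width, add_linear)
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return phi @ weights

# stand-in plant output: an affine trend plus a local nonlinearity
x = np.linspace(0.0, 10.0, 200)
y = 2.0 * x + np.sin(x)
centers = np.linspace(2.0, 8.0, 5)   # sparse centers leave the trend uncovered
rbf_only = fit_predict(x, y, centers, 1.0, add_linear=False)
with_linear = fit_predict(x, y, centers, 1.0, add_linear=True)
```

With the affine terms absorbing the trend, the localized basis functions only have to model the residual nonlinearity, so the approximation error drops markedly.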
In this paper, we propose a framework for defining feature extraction techniques, called Pixel Clustering; it is an extension of wavelet-based feature extraction. We propose two linear feature extraction techniques using Pixel Clustering: IntensityPatches and RegionPatches. We evaluate the methods on color and grayscale image datasets: two face datasets and two object datasets. The proposed methods achieve short feature-extraction times and high accuracy compared with other linear and state-of-the-art feature extraction techniques.
In this article, various notions of edges encountered in digital image processing are reviewed in terms of compact representation (or completion). We show that critical exponents defined in statistical physics lead to a much more coherent definition of edges, consistent across scales in acquisitions of natural phenomena such as high-resolution natural images or turbulent flows. Edges belong to the multiscale hierarchy of an underlying dynamics and are understood from a statistical perspective well adapted to natural images. Numerical methods for evaluating critical exponents in the non-ergodic case, which applies to the vast majority of natural images, are recalled. We study the framework of reconstructible systems in a microcanonical formulation, show how it redefines edge completion, and show how it can be used to quantitatively assess the adequacy of edges as candidates for compact representations. We pay particular attention to turbulent data, in which edges in the classical sense are particularly challenged. Tests are conducted and evaluated on a standard database of natural images. We test the newly introduced compact representation as a candidate for evaluating turbulent cascading properties of complex images, and we show better reconstruction performance than the classical methods tested.