Similar Articles
20 similar articles retrieved.
1.
Image-based relighting allows us to efficiently light a scene under complicated illumination conditions. However, the traditional cubemap-based multi-resolution analysis samples the spherical surface unevenly, with a higher sampling rate near the face corners and a lower one near the face centers. This non-uniformity penalizes the efficiency of the data representation. This paper presents a uniformly sampling multi-resolution analysis approach, namely the icosahedron spherical wavelets (ISW), for image-based relighting under time-varying distant environments. Since the proposed ISW approach provides a highly uniform sampling distribution over the spherical domain, we can efficiently handle high-frequency local variations in the illumination changes as well as reduce the number of wavelet coefficients needed in the renderings. Furthermore, visual artifacts are shown to be better suppressed by the proposed ISW approach. Compared with the traditional cubemap-based multi-resolution analysis approach, we show that our approach can effectively produce higher-quality image sequences that are closer to the ground truth in terms of percentage squared error.
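The uniformity argument can be sketched numerically: the 12 unit-norm vertices of a regular icosahedron (the base mesh that ISW-style schemes subdivide) are all equally spaced from their five nearest neighbors, unlike cubemap face samples. A minimal illustration in Python (the function name is ours, not the paper's):

```python
import numpy as np

def icosahedron_vertices():
    """Return the 12 unit-norm vertices of a regular icosahedron.

    The vertices are the cyclic permutations of (0, +/-1, +/-phi),
    normalized onto the unit sphere; every vertex sees its 5 nearest
    neighbors at the same angular distance (cosine 1/sqrt(5)).
    """
    phi = (1.0 + np.sqrt(5.0)) / 2.0  # golden ratio
    base = []
    for s1 in (1.0, -1.0):
        for s2 in (1.0, -1.0):
            base.append((0.0, s1 * 1.0, s2 * phi))
            base.append((s1 * 1.0, s2 * phi, 0.0))
            base.append((s1 * phi, 0.0, s2 * 1.0))
    v = np.array(base)
    return v / np.linalg.norm(v, axis=1, keepdims=True)
```

Subdividing the 20 triangular faces of this base mesh and reprojecting onto the sphere yields the near-uniform sampling grid the abstract contrasts with cubemaps.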

2.
In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
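As a simplified sketch of the first level (plain least squares rather than the paper's constrained estimator; the function names, the Gaussian SRBF form, and the bandwidth value are our assumptions):

```python
import numpy as np

def srbf_design(dirs, centers, bandwidth=5.0):
    """Gaussian spherical RBF design matrix: exp(bandwidth * (d.c - 1)).

    dirs: (N, 3) unit sample directions; centers: (K, 3) unit SRBF centers.
    Each basis function peaks at its center direction and decays with angle.
    """
    return np.exp(bandwidth * (dirs @ centers.T - 1.0))

def fit_srbf_weights(dirs, values, centers, bandwidth=5.0):
    """Least-squares SRBF weights approximating one pixel's sampled radiance."""
    A = srbf_design(dirs, centers, bandwidth)
    w, *_ = np.linalg.lstsq(A, values, rcond=None)
    return w
```

The K fitted weights per pixel replace the raw radiance samples; the second level of the paper then wavelet-compresses these weight images across pixels.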

3.
This paper evaluates the compression performance and characteristics of two wavelet coding schemes for electrocardiogram (ECG) signals suitable for real-time telemedical applications. The two proposed methods, namely the optimal zonal wavelet coding (OZWC) method and the wavelet transform higher-order-statistics-based coding (WHOSC) method, are used to assess ECG compression issues. The WHOSC method employs higher order statistics (HOS) and uses multirate processing with an autoregressive HOS model to increase the robustness of the coding scheme. The OZWC algorithm is based on the optimal wavelet-based zonal coding method developed for the class of discrete "Lipschitzian" signals. Both methodologies were evaluated using the normalized rms error (NRMSE), the average compression ratio (CR), and bits-per-sample criteria, applied to abnormal clinical ECG data samples selected from the MIT-BIH database and the Creighton University Cardiac Center database. Simulation results illustrate that both methods achieve high CRs with low NRMSE and can enhance medical data compression in a hybrid mobile telemedical system that integrates these algorithms for real-time ECG transmission, especially over low-bandwidth mobile links.
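The two evaluation criteria can be stated compactly; note that NRMSE conventions vary, and the normalization by the signal's RMS value below is one common choice, not necessarily the paper's exact formula:

```python
import numpy as np

def nrmse(original, reconstructed):
    """RMS reconstruction error normalized by the signal's RMS value."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    err_rms = np.sqrt(np.mean((original - reconstructed) ** 2))
    sig_rms = np.sqrt(np.mean(original ** 2))
    return err_rms / sig_rms

def compression_ratio(original_bits, compressed_bits):
    """CR = original size / compressed size (e.g. in bits)."""
    return original_bits / compressed_bits
```

A good ECG codec maximizes CR while keeping NRMSE low enough that clinically relevant waveform features survive.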

4.
Compressing inconsistent data
In a frequent practical situation, one possesses inconsistent fragmentary data concerning some industrial process or natural phenomenon. It is an interesting and reasonable task to assess the most concise way to store or transmit them. The authors consider the zero-error case of the problem: all the data are to be saved by incorporating them into the most concise, but necessarily alternative, consistent data structures. More precisely, the goal is to find a set of alternatives that requires the minimum total storage. From the mathematical viewpoint, the model is information-theoretic and gives a common framework for many combinatorial problems in the theory of extremal hypergraphs. From the practical viewpoint, the interest of the mathematical theory is in producing new information measures that capture the inconsistency in the data.

5.
肖笛, 刘增力, 虞贵财. 《信息技术》, 2006, 30(11): 60-62
This article describes the principles of compressing and decompressing video images with the wavelet transform. The ADV611, a video image compression/decompression chip based on the wavelet transform, is introduced from both a theoretical and an application perspective, and a practical example of applying the ADV611 to image compression is given together with result images.

6.
To efficiently compress rasterized compound documents, an encoder must be content-adaptive. Content adaptivity may be achieved with a layered approach, in which a compound image is segmented into layers so that appropriate encoders can compress each layer individually. A major factor in using standard encoders efficiently is matching the layers' characteristics to those of the encoders by using data-filling techniques to fill in the initially sparse layers. In this work we review data-filling methods and propose a sub-optimal non-linear projections scheme that efficiently matches the baseline JPEG coder in compressing background layers, leading to smaller files with better image quality.

7.
Inverse halftoning using wavelets
This work introduces a new approach to inverse halftoning using nonorthogonal wavelets. The distinct features of this wavelet-based approach are: (1) edge information in the highpass wavelet images of a halftone image is extracted and used to assist inverse halftoning, (2) cross-scale correlations in the multiscale wavelet decomposition are used for removing background halftoning noise while preserving important edges in the wavelet lowpass image, and (3) experiments show that our simple wavelet-based approach outperforms the best results obtained from inverse halftoning methods published in the literature, which are iterative in nature.
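A minimal sketch of the underlying decomposition (a one-level 2-D Haar transform standing in for the paper's nonorthogonal wavelets; the function name is ours): the highpass bands LH/HL/HH isolate the high-frequency halftoning noise that such methods suppress, while LL retains the smooth content.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition (image dimensions must be even).

    Returns (LL, LH, HL, HH). With the /4 normalization, LL is the
    local 2x2 average; the three detail bands carry the high-frequency
    content where halftone dot patterns concentrate.
    """
    img = np.asarray(img, dtype=float)
    a = img[0::2, :] + img[1::2, :]   # vertical lowpass (pair sums)
    d = img[0::2, :] - img[1::2, :]   # vertical highpass (pair diffs)
    LL = (a[:, 0::2] + a[:, 1::2]) / 4.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 4.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 4.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return LL, LH, HL, HH
```

A checkerboard halftone pattern, for instance, lands almost entirely in HH, which is why attenuating the detail bands while keeping detected edges recovers a smooth grayscale image.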

8.
A method of compressing the data volume of left ventricular cineangiograms is proposed. This method enables digital optical disks to store the cineangiograms at a recording density equivalent to that of high-density television video tape recorders (HDTV VTRs) while preserving the quality of the original image. The data volume of the cineangiograms is compressed by using an approximating function and storing its coefficients. Each cineangiogram frame is first decomposed into three regions: the inner part, the inner wall, and the background. Each region is then compressed by means of a difference operation and an adaptive approximation with smooth functions. Performance was tested on 20 cases using actual cineangiograms. The specifications were verified for a spatial resolution of 1000 TV lines, a dynamic range of 60 dB, an SNR of 40 dB (p-p/rms), compression to 7% of the original volume, and 0.19 s/frame for decoding on a 1.7 MFLOPS computer.

9.
In this letter we establish a wavelet model for video traffic. Unlike existing methods, which model video traffic in the time domain, we model the wavelet coefficients in the wavelet domain. The strengths of the wavelet model include: (1) a unified approach that models both the long-range and the short-range dependence in video traffic simultaneously; (2) a computationally efficient method for developing the model and generating high-quality video traffic; and (3) feasibility of performance analysis using the model.

10.
In this paper, the use of nonseparable wavelets for tomographic reconstruction is investigated. Local tomography is also presented. The algorithm computes both the quincunx approximation and detail coefficients of a function from its projections. Simulation results showed that nonseparable wavelets provide a reconstruction improvement versus separable wavelets.

11.
Multiresolution tomographic reconstruction using wavelets
Shows how the separable two-dimensional wavelet representation leads naturally to an efficient multiresolution tomographic reconstruction algorithm. This algorithm is similar to the conventional filtered backprojection algorithm, except that the filters are now angle dependent, and the backprojection gives the wavelet coefficients of the reconstruction, which are then used to synthesize the reconstruction at various resolution levels. By reconstructing only a small localized region at high resolution, the authors show how radiation exposure and computation can be significantly reduced, compared to a standard reconstruction.

12.
In broadband wireless communications, coded orthogonal frequency-division multiplexing (OFDM) can be used with multiple receive antennas to achieve both frequency diversity and space diversity. In this scenario, the optimal approach is subcarrier-based space combining. However, such an approach is quite complex, because multiple discrete Fourier transform (DFT) blocks, one per receive antenna, are used. We propose a pre-DFT processing scheme based upon eigenanalysis. In the proposed scheme, the received signals are weighted and combined both before and after the DFT processing. As a result, the required number of DFT blocks can be significantly reduced. With perfect weighting coefficients, the margin of performance improvement decreases as the number of DFT blocks increases, enabling an effective performance-complexity tradeoff. To achieve the maximum average pairwise codeword distance, it is shown that the maximum number of DFT blocks required equals the minimum of the number of receive antennas and the number of distinct paths in the channel. When the number of distinct paths is larger than the number of receive antennas, extensive simulation results also show that near-optimal performance can still be achieved for most channels with a smaller number of DFT blocks. Finally, in an OFDM system with differential modulation, we use a signal covariance matrix to obtain the weighting coefficients before the DFT processing. In this case, simulation results demonstrate that the performance of the proposed scheme can be better than subcarrier-based space combining, but with much lower complexity.
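The pre-DFT weighting can be sketched as follows (an illustrative simplification, not the authors' implementation: the plain sample-covariance estimate and the function names are our assumptions). The dominant eigenvectors of the receive covariance serve as combining weights, so only as many DFT blocks as combined branches are needed:

```python
import numpy as np

def predft_weights(rx, num_branches):
    """Pre-DFT combining weights from the received-signal covariance.

    rx: (num_antennas, num_samples) complex array of antenna signals.
    Returns a (num_antennas, num_branches) weight matrix whose columns
    are the dominant eigenvectors of the sample covariance; each
    combined branch then needs only one DFT block.
    """
    R = rx @ rx.conj().T / rx.shape[1]        # sample covariance estimate
    vals, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    return vecs[:, ::-1][:, :num_branches]    # strongest eigenvectors first

def combine(rx, W):
    """Weight-and-sum the antenna signals before the DFT stage."""
    return W.conj().T @ rx
```

In the rank-1 case (one dominant path), a single combined branch captures essentially all of the received energy, which is the limiting case of the complexity reduction described above.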

13.
This paper presents a novel scheme for simultaneous compression and denoising of images: WISDOW-Comp (Wavelet based Image and Signal Denoising via Overlapping Waves—Compression). It is based on the atomic representation of wavelet details employed in WISDOW for image denoising; however, atoms can also be used to achieve compression. In particular, the core of WISDOW-Comp consists of recovering wavelet details, i.e. atoms, by exploiting wavelet low-frequency information. Therefore, just the approximation band and the significance map of the atoms' absolute maxima have to be encoded and sent to the decoder for recovering a cleaner as well as compressed version of the image under study. Experimental results show that WISDOW-Comp outperforms state-of-the-art compression-based denoisers in terms of both rate and distortion. Some technical devices are also investigated for further improving its performance.

14.
An approach to watermarking digital images using non-regular wavelets is advanced. Non-regular transforms spread the energy in the transform domain. The proposed method leads at the same time to increased image quality and increased robustness with respect to lossy compression. The approach provides robust watermarking by suitably creating watermarked messages that have energy compaction and frequency spreading. Our experimental results show that the application of non-regular wavelets, instead of regular ones, can furnish a superior robust watermarking scheme. The generated watermarked data is more immune against non-intentional JPEG and JPEG2000 attacks.

15.
Application of the wavelet transform to removing the direct wave from ground-penetrating radar signals
Based on the good directional selectivity of the two-dimensional directional continuous wavelet, the direct wave in ground-penetrating radar (GPR) signals is removed. Processing two buried targets of different natures verifies the effectiveness of this method. The one-dimensional wavelet transform is used to compute the instantaneous parameters of the GPR signals, the role of several instantaneous parameters in target identification is analyzed, and the influence of the direct wave on target identification using instantaneous parameters is compared.

16.
For the first time a state variable transient analysis using wavelets is developed and implemented in a circuit simulator. The formulation is particularly well suited to modeling RF and microwave circuits and is validated by considering a nonlinear transmission line. However, results indicate that still more research is needed to make this method efficient for the simulation of large circuits.

17.
In this article, we demonstrate a bandwidth-efficient method of pulse amplitude modulation (PAM) signaling using a family of orthonormal wavelets as the baseband pulse. These wavelets can be transmitted using single sideband (SSB) transmission, since they have zero average value. We provide a comparison with raised-cosine signaling, with the wavelet approach offering a 50% greater data rate at the same bandwidth.

18.
A new methodology is described for classifying images of marble surfaces according to visual quality parameters, based on multiresolution wavelet decomposition. Experimental results show that the three marble categories considered can be easily separated with the proposed method.

19.
Experiments with wavelets for compression of SAR data
Wavelet transform coding is shown to be an effective method for compression of both detected and complex synthetic aperture radar (SAR) imagery. Three different orthogonal wavelet transform filters are examined for use in SAR data compression; the performances of the filters are correlated with mathematical properties such as regularity and number of vanishing moments. Excellent quality reconstructions are obtained at data rates as low as 0.25 bpp for detected data and as low as 0.5 bits per element (or 2 bpp) for complex data.
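Wavelet transform coding of this kind can be sketched with a one-level orthonormal Haar transform and hard thresholding (an illustrative stand-in for the paper's filters; the function names and keep-fraction parameter are our assumptions):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail coefficients
    return s, d

def inv_haar_step(s, d):
    """Invert haar_step exactly (the transform is orthonormal)."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def threshold_code(x, keep_fraction=0.25):
    """Keep only the largest-magnitude coefficients, then reconstruct.

    The fraction of retained coefficients is a rough proxy for the
    coded bit rate; smooth data tolerates aggressive thresholding.
    """
    s, d = haar_step(np.asarray(x, dtype=float))
    coeffs = np.concatenate([s, d])
    k = max(1, int(keep_fraction * len(coeffs)))
    thresh = np.sort(np.abs(coeffs))[-k]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    half = len(coeffs) // 2
    return inv_haar_step(coeffs[:half], coeffs[half:])
```

A real codec would cascade several decomposition levels and entropy-code the surviving coefficients, but the rate-distortion tradeoff the abstract reports comes from this same sparsification step.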

20.
Segmentation of bright targets using wavelets and adaptive thresholding
A general systematic method for the detection and segmentation of bright targets is developed. We use the term "bright target" to mean a connected, cohesive object which has an average intensity distribution above that of the rest of the image. We develop an analytic model for the segmentation of targets, which uses a novel multiresolution analysis in concert with a Bayes classifier to identify the possible target areas. A method is developed which adaptively chooses thresholds to segment targets from background, by using a multiscale analysis of the image probability density function (PDF). A performance analysis based on a Gaussian distribution model is used to show that the obtained adaptive threshold is often close to the Bayes threshold. The method has proven robust even when the image distribution is unknown. Examples are presented to demonstrate the efficiency of the technique on a variety of targets.
