Similar Literature (20 results)
1.
In this paper, we use the multidimensional multiscale parser (MMP) algorithm, a recently developed universal lossy compression method, to compress data from electrocardiogram (ECG) signals. The MMP is based on approximate multiscale pattern matching, encoding segments of an input signal using expanded and contracted versions of patterns stored in a dictionary. The dictionary is updated using concatenated and displaced versions of previously encoded segments, so MMP builds its own dictionary while the input data is being encoded. The MMP can be easily adapted to compress signals of any number of dimensions, and has been successfully applied to compress two-dimensional (2-D) image data. The quasi-periodic nature of ECG signals makes them suitable for compression using recurrent patterns, as MMP does. However, for MMP to compress ECG signals efficiently, several adaptations had to be made, such as the use of a continuity criterion among segments and the adoption of a prune-join strategy for segmentation. The rate-distortion performance achieved was very good. We show simulation results where MMP performs as well as some of the best encoders in the literature, although at the expense of high computational complexity.
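As a rough illustration of the approximate multiscale matching at the heart of MMP, the sketch below (a hypothetical simplification, not the authors' implementation) rescales dictionary patterns to a segment's length, picks the minimum-distortion match, and grows the dictionary by concatenating previously encoded segments:

```python
# Hypothetical MMP-style matching sketch; names and structure are our own.
import numpy as np

def rescale(pattern: np.ndarray, length: int) -> np.ndarray:
    """Expand or contract a 1-D pattern to a target length by linear interpolation."""
    old = np.linspace(0.0, 1.0, num=len(pattern))
    new = np.linspace(0.0, 1.0, num=length)
    return np.interp(new, old, pattern)

def best_match(segment: np.ndarray, dictionary: list):
    """Index and distortion of the dictionary pattern that, rescaled to the
    segment's length, minimizes the squared error."""
    best_idx, best_err = -1, float("inf")
    for i, pattern in enumerate(dictionary):
        err = float(np.sum((segment - rescale(pattern, len(segment))) ** 2))
        if err < best_err:
            best_idx, best_err = i, err
    return best_idx, best_err

def update_dictionary(dictionary: list, prev_seg: np.ndarray, cur_seg: np.ndarray) -> None:
    """Grow the dictionary with a concatenation of already-encoded segments."""
    dictionary.append(np.concatenate([prev_seg, cur_seg]))
```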

2.
In this paper, we exploit a recently introduced coding algorithm called the multidimensional multiscale parser (MMP) as an alternative to the traditional transform quantization-based methods. MMP uses approximate pattern matching with adaptive multiscale dictionaries that contain concatenations of scaled versions of previously encoded image blocks. We propose the use of predictive coding schemes that modify the source's probability distribution, in order to favour the efficiency of MMP's dictionary adaptation. Statistical conditioning is also used, allowing for an increased coding efficiency of the dictionaries' symbols. New dictionary design methods that allow for an effective compromise between the introduction of new dictionary elements and the reduction of codebook redundancy are also proposed. Experimental results validate the proposed techniques by showing consistent improvements in PSNR performance over the original MMP algorithm. When compared with state-of-the-art methods, like JPEG2000 and H.264/AVC, the proposed algorithm achieves relevant gains (up to 6 dB) for nonsmooth images and very competitive results for smooth images. These results strongly suggest that the new paradigm posed by MMP can be regarded as an alternative to the one traditionally used in image coding, for a wide range of image types.
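The following toy sketch (our own, with hypothetical names; not the paper's scheme) illustrates the predictive-coding idea: predict each block from its causal neighbors so that the residual handed to the pattern-matching coder has a more peaked distribution:

```python
# Hypothetical predictive step: a DC-style prediction from causal neighbors;
# the residual (more peaked distribution) would be passed to the MMP coder.
import numpy as np

def predict_block(image: np.ndarray, r: int, c: int, size: int) -> np.ndarray:
    """Predict a size x size block at (r, c) from the column to its left and
    the row above; falls back to mid-grey at the image border."""
    parts = []
    if c > 0:
        parts.append(image[r:r + size, c - 1])
    if r > 0:
        parts.append(image[r - 1, c:c + size])
    dc = np.mean(np.concatenate(parts)) if parts else 128.0
    return np.full((size, size), dc)

def residual_block(image: np.ndarray, r: int, c: int, size: int) -> np.ndarray:
    block = image[r:r + size, c:c + size].astype(float)
    return block - predict_block(image, r, c, size)
```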

3.
In this paper, we propose a novel progressive image transmission scheme. The method uses the concept of BTC-PF for faster decoding. Images are decomposed into a number of blocks based on a smoothness criterion: smooth blocks are encoded by their block means, and the others by the BTC-PF method. To encode a block by BTC-PF, the codebook is organized as a full-search progressive transmission tree, which greatly helps efficient progressive transmission. The method provides good image quality at low bit rates and faster decoding than other spatial-domain progressive transmission methods. We also extend the method to color images: each color plane is encoded separately, and the encoded information of the planes is transmitted in an interleaved manner so that color images are obtained right from the early stages.
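For context, here is a minimal sketch of classic block truncation coding (BTC) for a single block, the two-level quantization that BTC-PF builds on; this is generic BTC, not the paper's pattern-fitting variant:

```python
# Generic BTC for one block (not the paper's pattern-fitting variant):
# a bit plane plus two levels chosen to preserve the block mean and variance.
import numpy as np

def btc_encode(block: np.ndarray):
    mean, std = block.mean(), block.std()
    bitplane = block >= mean
    q, m = int(bitplane.sum()), block.size
    if q in (0, m):                      # flat block: one level suffices
        return bitplane, mean, mean
    a = mean - std * np.sqrt(q / (m - q))        # level for pixels below the mean
    b = mean + std * np.sqrt((m - q) / q)        # level for pixels at/above the mean
    return bitplane, a, b

def btc_decode(bitplane: np.ndarray, a: float, b: float) -> np.ndarray:
    return np.where(bitplane, b, a)
```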

4.
A conceptually simple coding method may be described as follows. The source sequence is parsed into fixed-length blocks and a list of these blocks is placed in a dictionary. In the lossless case, the dictionary is transmitted and each successive block is encoded by giving its dictionary location. In the lossy case, the smallest collection of blocks such that every member of the dictionary is within distortion δ of the collection is determined, this codebook is transmitted, and each successive block is encoded by giving the location of a member of the code within δ of it. We show that by optimizing on the block length, this method is universal, that is, for any ergodic process it achieves entropy in the limit in the lossless case and the rate-distortion function R(δ) in the lossy case.
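A toy version of the lossless variant described above (our own illustration): parse the source into fixed-length blocks, transmit the dictionary of distinct blocks, and encode each block by its dictionary index:

```python
# Toy lossless block-dictionary coder (our illustration of the scheme above).
def block_dictionary_encode(seq: str, block_len: int):
    blocks = [seq[i:i + block_len]
              for i in range(0, len(seq) - block_len + 1, block_len)]
    dictionary = sorted(set(blocks))          # transmitted once
    index = {b: i for i, b in enumerate(dictionary)}
    return dictionary, [index[b] for b in blocks]

dictionary, codes = block_dictionary_encode("abababbaabab", 2)
# dictionary -> ['ab', 'ba'];  codes -> [0, 0, 0, 1, 0, 0]
```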

5.
Because the WBCT compression algorithm filters images of different smoothness indiscriminately, its reconstructed images for relatively smooth images are worse than those of ordinary wavelet coding methods. To address this, this paper defines an image smoothness measure and classifies images by it, then applies directional filtering of different strengths using multidirectional, multiscale critical sampling. Exploiting the characteristics of the transformed wavelet coefficients, the authors use the SPIHT algorithm for embedded coding, thereby improving the WBCT compression algorithm…
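The abstract does not give the paper's smoothness definition; the sketch below is a hypothetical stand-in, scoring an image by its mean gradient magnitude so that low values route to lighter directional filtering:

```python
# Hypothetical smoothness measure (the abstract does not define the paper's):
# mean gradient magnitude; low values indicate a smooth image.
import numpy as np

def smoothness(img: np.ndarray) -> float:
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))
```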

6.
In this paper, we introduce a novel technique for adaptive scalar quantization. Adaptivity is useful in applications, including image compression, where the statistics of the source are either not known a priori or will change over time. Our algorithm uses previously quantized samples to estimate the distribution of the source, and does not require that side information be sent in order to adapt to changing source statistics. Our quantization scheme is thus backward adaptive. We propose that an adaptive quantizer can be separated into two building blocks, namely, model estimation and quantizer design. The model estimation produces an estimate of the changing source probability density function, which is then used to redesign the quantizer using standard techniques. We introduce nonparametric estimation techniques that only assume smoothness of the input distribution. We discuss the various sources of error in our estimation and argue that, for a wide class of sources with a smooth probability density function (pdf), we provide a good approximation to a "universal" quantizer, with the approximation becoming better as the rate increases. We study the performance of our scheme and show how the loss due to adaptivity is minimal in typical scenarios. In particular, we provide examples and show how our technique can achieve signal-to-noise ratios within 0.05 dB of the optimal Lloyd-Max quantizer for a memoryless source, while achieving over 1.5 dB gain over a fixed quantizer for a bimodal source.
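The following sketch shows the backward-adaptive loop in miniature (a plain Lloyd iteration on past reconstructions stands in for the paper's nonparametric smooth-pdf estimate): both encoder and decoder re-derive the quantizer from previously reconstructed samples, so no side information is needed:

```python
# Backward-adaptive loop sketch: a plain Lloyd iteration on past reconstructions
# stands in for the paper's nonparametric smooth-pdf estimate.
import numpy as np

def lloyd_step(samples: np.ndarray, levels: np.ndarray) -> np.ndarray:
    """One Lloyd iteration: nearest-level partition, then centroid update."""
    assign = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
    new = levels.copy()
    for k in range(len(levels)):
        members = samples[assign == k]
        if members.size:
            new[k] = members.mean()
    return new

# Both encoder and decoder hold the same 'history' of reconstructed samples,
# so both compute the same updated levels -- no side information is sent.
def backward_adapt(history: np.ndarray, levels: np.ndarray) -> np.ndarray:
    return lloyd_step(history, levels)
```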

7.
Multiscale Poisson Intensity and Density Estimation
The nonparametric Poisson intensity and density estimation methods studied in this paper offer near minimax convergence rates for broad classes of densities and intensities with arbitrary levels of smoothness. The methods and theory presented here share many of the desirable features associated with wavelet-based estimators: computational speed, spatial adaptivity, and the capability of detecting discontinuities and singularities with high resolution. Unlike traditional wavelet-based approaches, which impose an upper bound on the degree of smoothness to which they can adapt, the estimators studied here guarantee nonnegativity and do not require any a priori knowledge of the underlying signal's smoothness to guarantee near-optimal performance. At the heart of these methods lie multiscale decompositions based on free-knot, free-degree piecewise-polynomial functions and penalized likelihood estimation. The degrees as well as the locations of the polynomial pieces can be adapted to the observed data, resulting in near-minimax optimal convergence rates. For piecewise-analytic signals, in particular, the error of this estimator converges at nearly the parametric rate. These methods can be further refined in two dimensions, and it is demonstrated that platelet-based estimators in two dimensions exhibit similar near-optimal error convergence rates for images consisting of smooth surfaces separated by smooth boundaries.
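As a toy illustration of penalized-likelihood multiscale fitting (piecewise-constant only, far simpler than the paper's free-knot, free-degree polynomials; the penalty value is hypothetical), one can recursively split an interval of Poisson counts and keep a split only when it pays for itself:

```python
# Toy penalized-likelihood fit: dyadic, piecewise-constant Poisson intensity.
# 'pen' is a hypothetical complexity penalty per split.
import numpy as np

def poisson_loglik(counts: np.ndarray) -> float:
    lam = counts.mean()
    if lam == 0.0:
        return 0.0
    return float(np.sum(counts * np.log(lam) - lam))

def fit_dyadic(counts: np.ndarray, pen: float = 2.0) -> np.ndarray:
    n = len(counts)
    if n < 2:
        return np.full(n, counts.mean())
    left, right = counts[:n // 2], counts[n // 2:]
    gain = poisson_loglik(left) + poisson_loglik(right) - poisson_loglik(counts)
    if gain > pen:        # split only if the likelihood gain beats the penalty
        return np.concatenate([fit_dyadic(left, pen), fit_dyadic(right, pen)])
    return np.full(n, counts.mean())
```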

8.
Image fusion algorithm based on PCNN and BWT
A new image fusion algorithm is proposed. First, the two images to be fused are decomposed with a biorthogonal wavelet transform (BWT), yielding two sets of multiscale images. Next, one set is taken as input to a master PCNN and the corresponding images of the other set as input to a slave PCNN; at each iteration, after the parallel PCNNs fire, a series of multiscale fused images is obtained. These are then reconstructed with the biorthogonal wavelet to produce the fusion result of each iteration; the information entropy of each iteration's result is computed, and the fused image with maximum entropy is taken as the final result. Extensive experiments and comparisons with other fusion algorithms demonstrate the effectiveness and superiority of the proposed algorithm.
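A hedged sketch of the selection step only: compute the information entropy of each candidate fused image and keep the one with the largest entropy (the PCNN firing itself is outside the scope of this toy):

```python
# Selection step only: keep the candidate fused image with maximum entropy.
import numpy as np

def entropy(img: np.ndarray) -> float:
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_fusion(candidates: list) -> np.ndarray:
    return max(candidates, key=entropy)
```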

9.
To improve the fusion of panchromatic and multispectral images, this paper proposes a remote-sensing image fusion method based on optimized dictionary learning. First, blocks taken from images in a standard image library serve as training samples; K-means clustering is applied to them, and, according to the clustering result, image blocks that are numerous and highly similar are moderately pruned to reduce the number of training samples. The pruned training samples are then trained to obtain a general-purpose dictionary, in which similar atoms and rarely used atoms are marked. These marked atoms are replaced with normalized panchromatic image blocks that differ most from the original sparse model, yielding an adaptive dictionary. The adaptive dictionary is used to sparsely represent the intensity component obtained by the IHS transform of the multispectral image and the source panchromatic image; in each image block's sparse coefficients, the modulus-maxima coefficients are separated out as maxima sparse coefficients, and the remaining ones are called residual sparse coefficients. Different fusion rules are chosen for the maxima and residual sparse coefficients so as to retain more spectral information and spatial detail, and an inverse IHS transform finally yields the fused image. Experimental results show that, compared with traditional methods, the proposed method produces fused images with better subjective visual quality and better objective evaluation metrics.
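For the IHS step assumed above, the classic fast-IHS approximation can be sketched as follows (a generic sketch, not the paper's code): take the band average as intensity and add the intensity change back equally to all bands:

```python
# Classic fast-IHS sketch (generic, not the paper's code): band average as
# intensity; inverse by adding the intensity change equally to all bands.
import numpy as np

def ihs_intensity(ms: np.ndarray) -> np.ndarray:
    """Intensity of an H x W x 3 multispectral image."""
    return ms.mean(axis=2)

def ihs_substitute(ms: np.ndarray, fused_intensity: np.ndarray) -> np.ndarray:
    delta = fused_intensity - ihs_intensity(ms)
    return ms + delta[:, :, None]
```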

10.
In this paper, we propose a new approach for block-based lossless image compression by defining a new semiparametric finite-mixture-model-based adaptive arithmetic coding. Conventional adaptive arithmetic encoders start encoding a sequence of symbols with a uniform distribution and update the frequency of each symbol by incrementing its count after it has been encoded. When encoding an image row by row or block by block, conventional adaptive arithmetic encoders provide the same compression results. In addition, images are normally non-stationary signals, meaning different areas of an image have different probability distributions, so conventional adaptive arithmetic encoders, which maintain probabilities for the whole image, are not very efficient. In the proposed compression scheme, an image is divided into non-overlapping blocks of pixels, which are separately encoded with an appropriate statistical model. Hence, instead of starting to encode each block with a uniform distribution, we propose to start with a probability distribution modeled by a semiparametric mixture obtained from the distributions of its neighboring blocks. The semiparametric model parameters are estimated through maximum likelihood using the expectation-maximization algorithm in order to maximize the arithmetic coding efficiency. The results of comparative experiments show significant improvements over conventional adaptive arithmetic encoders and the state-of-the-art lossless image compression standards.
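A hypothetical sketch of the initialization idea (uniform mixture weights here; the paper estimates them by EM): seed each block's adaptive model with a mixture of the empirical symbol distributions of already-coded neighboring blocks rather than a uniform count table:

```python
# Hypothetical seeding of a block's adaptive model from neighbor statistics;
# 'prior' and the per-neighbor mass of 100 counts are illustrative choices.
import numpy as np

def neighbor_seeded_counts(neighbor_hists: list, alphabet: int,
                           prior: float = 1.0) -> np.ndarray:
    counts = np.full(alphabet, prior)      # Laplace floor keeps all symbols codable
    for h in neighbor_hists:
        counts += h / max(h.sum(), 1) * 100.0
    return counts                          # feed to the arithmetic coder's model
```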

11.
To achieve better image denoising, an improved image denoising algorithm based on K-SVD (Singular Value Decomposition) dictionary learning is proposed. First, the noisy input signal is decomposed by K-means clustering, and the resulting image blocks undergo sparse Bayesian learning and noise updating; after a certain number of iterations, the Orthogonal Matching Pursuit (OMP) algorithm is used to sparsely code the image blocks. On top of the completed sparse coding, the dictionary is then updated column by column via singular value decomposition, iterating repeatedly until an overcomplete dictionary is obtained for sparse representation. Finally, the processed image is reconstructed to give the denoised result. Experimental results show that, compared with traditional K-SVD dictionary denoising, the improved algorithm removes noise from the image more effectively while preserving image edges and details, giving better visual quality.
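The column-by-column SVD update mentioned above is the textbook K-SVD step; a sketch of that standard step (not the paper's modified pipeline):

```python
# Textbook K-SVD atom update (not the paper's modified pipeline).
import numpy as np

def ksvd_update_atom(D: np.ndarray, X: np.ndarray, Y: np.ndarray, k: int) -> None:
    """D: dictionary (n x K), X: sparse codes (K x N), Y: signals (n x N).
    Updates atom k and its coefficients in place via a rank-1 SVD."""
    users = np.nonzero(X[k, :])[0]         # signals that actually use atom k
    if users.size == 0:
        return
    # Residual with atom k's contribution removed, restricted to its users.
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]
    X[k, users] = s[0] * Vt[0, :]
```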

12.
Although side-match vector quantisation (SMVQ) reduces the bit rate, the quality of image coding using SMVQ generally degrades as the grey-level transition across the boundaries of neighbouring blocks increases or decreases. The author proposes a smooth side-match weighted method that yields a state codebook according to the smoothness of the grey levels between neighbouring blocks. When a block is encoded, a corresponding weight is assigned to each neighbouring block to represent its relative importance. This smooth side-match weighted vector quantisation (SSMWVQ) achieves a higher PSNR than SMVQ at the same bit rate. Also, each block can be pre-encoded in an image, allowing each encoded block to use all neighbouring blocks to yield the state codebook in SSMWVQ, rather than only two neighbouring blocks as in SMVQ. Moreover, SSMWVQ selects many high-detail blocks as basic blocks to enhance coding quality, and merges many low-detail blocks into larger ones to further reduce the bit rate. Experimental results reveal that SSMWVQ has a higher PSNR and lower bit rate than other methods.
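A hedged sketch of the side-match scoring: rate each codeword by how smoothly its border pixels continue the already-reconstructed neighbors, with per-neighbor weights as in the weighted variant (weight values hypothetical):

```python
# Side-match scoring sketch; weight values are hypothetical.
from typing import Optional
import numpy as np

def side_match_cost(codeword: np.ndarray,
                    top: Optional[np.ndarray], left: Optional[np.ndarray],
                    w_top: float = 1.0, w_left: float = 1.0) -> float:
    """codeword: b x b candidate block; top: bottom row of the reconstructed
    block above; left: right column of the reconstructed block to the left."""
    cost = 0.0
    if top is not None:
        cost += w_top * float(np.sum((codeword[0, :] - top) ** 2))
    if left is not None:
        cost += w_left * float(np.sum((codeword[:, 0] - left) ** 2))
    return cost

# The state codebook is the subset of codewords with the smallest cost.
```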

13.
Considering both the approximation and detail information of an image, a generalized linear scale autoregressive (GLSA) multiscale model is proposed. First, the model is used to establish a mapping between images at different scales; next, the original image and its wavelet decomposition are used to estimate the model parameters that determine this mapping; finally, a high-resolution image is estimated from a low-resolution one according to the mapping. Applied to face recognition, the class of a test image is determined by comparing its model parameters with those of the training-set images. Experimental results show that images estimated with the GLSA model are closer to the target image, and a face recognition system based on the model is fairly robust to illumination changes.
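As a toy stand-in for the GLSA scale-to-scale mapping (hypothetical; the actual model uses the wavelet approximation and detail information), a linear map from low-resolution features to high-resolution values can be fit by least squares:

```python
# Hypothetical linear scale-to-scale mapping fitted by least squares.
import numpy as np

def fit_scale_mapping(L: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Rows of L: low-resolution patch features; rows of H: co-located
    high-resolution values. Returns W minimizing ||L @ W - H||^2."""
    W, *_ = np.linalg.lstsq(L, H, rcond=None)
    return W

def upscale(L_new: np.ndarray, W: np.ndarray) -> np.ndarray:
    return L_new @ W
```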

14.
Reports progress in primitive-based image coding using nonorthogonal dyadic wavelets. A 3D isotropic wavelet is used to approximate the difference-of-Gaussians (D-o-G) operator. Convolution of the image with dilated versions of the wavelet produces three band-pass signals that approximate multiscale smoothed second derivatives. An additional convolution of the image with a Gaussian-shaped low-pass wavelet creates a fourth subband signal that preserves low-frequency information not described by the three band-pass signals. The authors show that the original image can be recovered from the watershed and watercourse lines of the three band-pass signals plus the lowpass subband signal. By thresholding the watershed/watercourse representation, subsampling the low-pass subband, and using edge post emphasis, the authors achieve data reduction with little loss of fidelity. Further compression of the watersheds and watercourses is achieved by chain coding their shapes and predictive coding their amplitudes prior to lossless arithmetic coding. Results are presented for grey-level test images at data rates between 0.1 and 0.3 b/pixel.
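The multiscale difference-of-Gaussians stack described above can be sketched as follows (scale values hypothetical; scipy's Gaussian filter as the smoother): successive smoothings are differenced to give the band-pass signals, and the last smoothing is the low-pass residual:

```python
# Multiscale difference-of-Gaussians stack; sigma values are hypothetical.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_subbands(img: np.ndarray, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Three band-pass signals plus the low-pass residual."""
    smoothed = [gaussian_filter(img.astype(float), s) for s in sigmas]
    bands = [smoothed[i] - smoothed[i + 1] for i in range(len(sigmas) - 1)]
    return bands, smoothed[-1]
```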

15.
余家林  孙季丰  李万益 《电子学报》2016,44(8):1899-1908
To reconstruct the 3-D human pose in multi-view images accurately and effectively, this paper proposes a pose estimation algorithm based on multi-kernel sparse coding. First, to address the ambiguity of pose estimation over consecutive frames, an HA-SIFT descriptor is designed for representing multi-view images, in which local body topology, relative limb positions, and appearance information are encoded simultaneously. Then, under a multiple kernel learning framework, an objective function is formulated that considers both the intrinsic manifold structure of the feature space and the geometric information of the pose space; the objective is optimized in Hilbert space to update the sparse codes, the overcomplete dictionary, and the kernel weights. Finally, the 3-D human pose corresponding to an unknown input is estimated by a linear combination of pose dictionary atoms. Experimental results show that the proposed method achieves higher estimation accuracy than kernel sparse coding, Laplacian sparse coding, and Bayesian sparse coding.

16.
In image processing, super-resolution algorithms based on sparse representation theory hinge on two key links: the high- and low-resolution dictionaries and the mapping between their sparse codes. Because image types are rich and varied, a single dictionary cannot represent images well, and on the mapping side a strictly-equal constraint between the sparse codes limits reconstruction quality. Addressing both issues, this work performs super-resolution reconstruction with multiple, more inclusive dictionaries and a looser, fully coupled sparse relation. Based on the nonlocal self-similarity of images, adaptive clustering is run several times and the best clustering is selected; multiple dictionaries are then obtained through a fully coupled sparse-learning super-resolution algorithm. Finally, the input low-resolution image is reconstructed class by class to obtain the high-resolution image. Experimental results show that on the images Leaves, Barbara, and Room, the proposed clustering algorithm improves peak signal-to-noise ratio (PSNR) over the original fully coupled sparse-learning algorithm by 0.51 dB, 0.21 dB, and 0.15 dB, respectively.
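A hypothetical sketch of the per-cluster routing implied above: assign each low-resolution patch to its nearest cluster centroid, then reconstruct it with that cluster's dictionary pair (the coupled sparse step itself is omitted):

```python
# Hypothetical routing of patches to per-cluster dictionaries.
import numpy as np

def route_patch(patch: np.ndarray, centroids: np.ndarray) -> int:
    """centroids: (K x d) cluster centers; returns the nearest cluster index,
    whose dictionary pair would reconstruct this patch."""
    d = np.linalg.norm(centroids - patch.ravel()[None, :], axis=1)
    return int(np.argmin(d))
```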

17.
Maximally smooth image recovery in transform coding
The authors consider the reconstruction of images from partial coefficients in block transform coders and its application to packet loss recovery in image transmission over asynchronous transfer mode (ATM) networks. The proposed algorithm uses the smoothness property of common image signals and produces a maximally smooth image among all those with the same coefficients and boundary conditions. It recovers each damaged block by minimizing the intersample variation within the block and across the block boundary. The optimal solution is achievable through two linear transformations, where the transform matrices depend on the loss pattern and can be calculated in advance. The reconstruction of contiguously damaged blocks is accomplished iteratively using the previous solution as the boundary conditions in each new step. This technique is applicable to any unitary block-transform and is effective for recovering the DC and low-frequency coefficients. When applied to still image coders using the discrete cosine transform (DCT), high quality images are reconstructed in the absence of many DC and low-frequency coefficients over spatially adjacent blocks. When the damaged blocks are isolated by block interleaving, satisfactory results have been obtained even when all the coefficients are missing.
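As an illustration of the smoothness objective (a generic discrete version, not the authors' closed-form transform solution): with boundary pixels fixed, minimizing intersample variation leads to a discrete Laplace equation on the damaged block, solvable by simple iteration:

```python
# Generic smoothness recovery: solve the discrete Laplace equation on the
# damaged block with fixed boundary pixels, by simple Jacobi iteration.
import numpy as np

def smooth_recover(padded: np.ndarray, iters: int = 200) -> np.ndarray:
    """padded: damaged block with a 1-pixel ring of known boundary values;
    interior entries hold any initial guess (e.g., the boundary mean)."""
    x = padded.astype(float).copy()
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[:-2, 1:-1] + x[2:, 1:-1] +
                                x[1:-1, :-2] + x[1:-1, 2:])
    return x
```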

18.
In this paper, we propose an effective method for quality assessment of screen content images (SCIs) based on multi-stage dictionary learning. To mimic the brain's layered processing of signals, we propose a hierarchical feature extraction strategy called multi-stage dictionary learning. First, the standard deviation of the normalized map obtained from a training image is used to select a certain proportion of the training data, which ensures learning efficiency and reduces the training burden. Next, the reconstructed map is weighted and used as the input to the next-stage dictionary learning. Then, using the trained dictionary, sparse representation is applied to extract features. Meanwhile, since some important features may be missed in multi-stage dictionary learning, we use Log Gabor filters to extract feature maps and compute the correlations between feature maps as a complementary set of features. Finally, for the two feature sets, we use SVR and a feature codebook to learn two objective scores, and an adaptive weighting strategy to obtain the final objective quality score. Experimental results show that the proposed method is superior to several mainstream SCI metrics on two publicly available databases.

19.
In this paper, we present a method for modeling a complex scene from a small set of input images taken from widely separated viewpoints and then synthesizing novel views. First, we find sparse correspondences across multiple input images and calibrate these input images taken with unknown cameras. Then one of the input images is chosen as the reference image for modeling by match propagation. A sparse set of reliably matched pixels in the reference image is initially selected and then propagated to neighboring pixels based on both a clustering-based light-invariant photoconsistency constraint and a data-driven depth smoothness constraint, which are integrated into a pixel matching quality function to efficiently deal with occlusions, lighting changes and depth discontinuities. Finally, a novel-view rendering algorithm is developed to synthesize a novel view quickly, again by match propagation. Experimental results show that the proposed method can produce good scene models from a small set of widely separated images and synthesize novel views of good quality.

20.
This paper treats a multiresolution hidden Markov model for classifying images. Each image is represented by feature vectors at several resolutions, which are statistically dependent as modeled by the underlying state process, a multiscale Markov mesh. Unknowns in the model are estimated by maximum likelihood, in particular by employing the expectation-maximization algorithm. An image is classified by finding the optimal set of states with maximum a posteriori probability. States are then mapped into classes. The multiresolution model enables multiscale information about context to be incorporated into classification. Suboptimal algorithms based on the model provide progressive classification that is much faster than the algorithm based on single-resolution hidden Markov models.
