Similar Documents
20 similar documents found (search time: 22 ms)
1.
The aim of this paper is to examine a set of wavelet functions (wavelets) for implementation in a still-image compression system and to highlight the benefits of this transform relative to current methods. The paper discusses important features of the wavelet transform in the compression of still images, including the extent to which image quality is degraded by the process of wavelet compression and decompression. Image quality is measured objectively, using the peak signal-to-noise ratio or the picture quality scale, and subjectively, using perceived image quality. The effects of different wavelet functions, image contents and compression ratios are assessed. A comparison with a discrete-cosine-transform-based compression system is given. Our results provide a good reference for application developers choosing a wavelet compression system for their application.
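The peak signal-to-noise ratio used above as the objective quality measure is standard; a minimal sketch of its computation for 8-bit images follows (the image arrays and the wavelet codec producing the reconstruction are assumed, not taken from the paper).

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage: quality = psnr(img, wavelet_codec_roundtrip(img))
```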

2.
A two-stage method for compressing bilevel images is described that is particularly effective for images containing repeated subimages, notably text. In the first stage, connected groups of pixels, corresponding approximately to individual characters, are extracted from the image. These are matched against an adaptively constructed library of patterns seen so far, and the resulting sequence of symbol identification numbers is coded and transmitted. From this information, along with the library itself and the offset from one mark to the next, an approximate image can be reconstructed. The result is a lossy method of compression that outperforms other schemes. The second stage employs the reconstructed image as an aid for encoding the original image using a statistical context-based compression technique. This yields a total bandwidth for exact transmission appreciably undercutting that required by other lossless binary image compression methods. Taken together, the lossy and lossless methods provide an effective two-stage progressive transmission capability for textual images, which has application for legal, medical, and historical purposes, and to archiving in general.
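As a rough illustration of the first stage described above, the sketch below extracts connected pixel groups from a bilevel image and matches each one against a growing pattern library using a simple pixel-mismatch criterion; the tolerance and matching rule are illustrative assumptions, not the paper's actual template matcher.

```python
import numpy as np
from scipy import ndimage

def extract_marks(bilevel: np.ndarray):
    """Return (bounding-box slice, binary mask) pairs for each connected group of foreground pixels."""
    labels, _ = ndimage.label(bilevel > 0)
    return [(sl, labels[sl] == i + 1) for i, sl in enumerate(ndimage.find_objects(labels))]

def match_or_add(mark: np.ndarray, library: list, max_mismatch: float = 0.05) -> int:
    """Return the library index of a close-enough pattern, adding the mark as a new symbol otherwise."""
    for idx, proto in enumerate(library):
        if proto.shape == mark.shape and np.mean(proto != mark) <= max_mismatch:  # assumed tolerance
            return idx
    library.append(mark)
    return len(library) - 1
```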

3.
Lossless image compression is often performed through decorrelation, context modelling and entropy coding of the prediction error. This paper aims to identify the potential improvements to compression performance through improved decorrelation. Two adaptive prediction schemes are presented that aim to provide the highest possible decorrelation of the prediction-error data. Consequently, complexity is not a primary concern and a high degree of adaptivity is sought. The adaptation of the respective predictor coefficients is based on training the predictors in a local causal area adjacent to the pixel to be predicted. The causal nature of the training means no transmission overhead is required and also enables lossless coding of the images. The first scheme is an adaptive neural network, trained on the actual data being coded, enabling continuous updates of the network weights. This results in a highly adaptive predictor, with localised optimisation based on stochastic gradient learning. Training for the second scheme is based on the recursive LMS (RLMS) algorithm, incorporating feedback of the prediction error. In addition to the adaptive prediction, the results presented here also incorporate an arithmetic coding scheme, producing results that are better than those of CALIC.
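To make the idea of locally trained causal prediction concrete, here is a minimal LMS-style sketch that predicts a pixel from four causal neighbours and updates the weights from the prediction error; the neighbourhood, learning rate and initialisation are illustrative assumptions rather than the configurations used in the paper.

```python
import numpy as np

def lms_predict(img: np.ndarray, r: int, c: int, w: np.ndarray, mu: float = 1e-6):
    """Predict pixel (r, c) from its causal W, N, NW, NE neighbours and update weights by stochastic gradient (LMS).

    Assumes r >= 1 and 1 <= c < img.shape[1] - 1 so that all four neighbours exist.
    """
    x = np.array([img[r, c - 1], img[r - 1, c], img[r - 1, c - 1], img[r - 1, c + 1]], dtype=np.float64)
    pred = float(w @ x)
    err = float(img[r, c]) - pred   # prediction error, the quantity that is entropy coded
    w += mu * err * x               # LMS weight update driven by the local error
    return pred, err, w
```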

4.
This paper presents a complete general-purpose method for still-image compression called adaptive prediction trees. Efficient lossy and lossless compression of photographs, graphics, textual, and mixed images is achieved by ordering the data in a multicomponent binary pyramid, applying an empirically optimized nonlinear predictor, exploiting structural redundancies between color components, then coding with hex-trees and adaptive run-length/Huffman coders. Color palettization and order-statistics prefiltering are applied adaptively as appropriate. Over a diverse image test set, the method outperforms standard lossless and lossy alternatives. The competing lossy alternatives use block transforms and wavelets in well-studied configurations. A major result of this paper is that predictive coding is a viable and sometimes preferable alternative to these methods.

5.
ECG compression using long-term prediction
A new algorithm for ECG signal compression is introduced. The compression system is based on the subautoregression (SAR) model, also known as the long-term prediction (LTP) model. The periodicity of the ECG signal is employed in order to further reduce redundancy, thus yielding high compression ratios. The suggested algorithm was evaluated using an in-house database. Very low bit rates on the order of 70 b/s are achieved with a relatively low reconstruction error (percent RMS difference, PRD) of less than 10%. The algorithm was compared, using the same database, with the conventional linear prediction (short-term prediction, STP) method, and was found superior at any bit rate. The suggested algorithm can be considered a generalization of the recently published average beat subtraction method.
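The percent RMS difference (PRD) quoted above as the reconstruction-error measure can be computed as in the following sketch; note that some PRD definitions subtract the signal mean in the denominator, so this unnormalised variant is an assumption.

```python
import numpy as np

def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Percent RMS difference between an ECG signal and its reconstruction."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))
```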

6.
In this paper, we propose a multimodal biometric image watermarking scheme with a two-stage integrity verification method that uses hidden thumbnail feature vectors for the secure authentication of multimodal biometric data (face and fingerprint). It is essentially a blind, spread-spectrum-based robust watermarking method. The proposed method enables us to detect a tampered region by controlling the watermark embedding strength to meet the requirement of a predefined watermark extraction threshold. The key idea is that the thumbnail feature vectors of a face image are used as a watermark pattern and embedded into a fingerprint image in order to verify the integrity of the respective biometric data. The first stage of integrity verification, for the fingerprint image, is done by deciding the validity of the extracted thumbnail patterns. The second stage of integrity verification, for the face image, is done by one-to-one matching between the thumbnail feature vectors extracted from the face image and those of the received face image. Experimental results showed that the proposed method has a high detection rate for forged biometric data and provides strong security assurance.
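The following is a minimal sketch of additive spread-spectrum embedding of the kind referred to above: each watermark bit modulates a pseudo-random +/-1 carrier that is added to the host image with strength alpha. The carrier generation, one-carrier-per-bit layout and clipping are illustrative assumptions, not the scheme proposed in the paper.

```python
import numpy as np

def spread_spectrum_embed(host: np.ndarray, bits, alpha: float = 2.0, seed: int = 0) -> np.ndarray:
    """Embed watermark bits by adding key-dependent pseudo-random +/-1 carriers scaled by alpha."""
    rng = np.random.default_rng(seed)          # the seed plays the role of a secret key
    marked = host.astype(np.float64).copy()
    for bit in bits:
        carrier = rng.choice([-1.0, 1.0], size=host.shape)
        marked += alpha * (1.0 if bit else -1.0) * carrier   # alpha trades robustness against visibility
    return np.clip(marked, 0, 255)
```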

7.
This paper presents a novel scheme for simultaneous compression and denoising of images: WISDOW-Comp (Wavelet-based Image and Signal Denoising via Overlapping Waves - Compression). It is based on the atomic representation of wavelet details employed in WISDOW for image denoising. However, atoms can also be used for achieving compression. In particular, the core of WISDOW-Comp consists of recovering wavelet details, i.e. atoms, by exploiting wavelet low-frequency information. Therefore, only the approximation band and the significance map of atom absolute maxima have to be encoded and sent to the decoder for recovering a cleaner as well as compressed version of the image under study. Experimental results show that WISDOW-Comp outperforms the state of the art of compression-based denoisers in terms of both rate and distortion. Some technical devices are also investigated for further improving its performance.

8.
9.
A rate-distortion model for describing motion prediction efficiency in interframe wavelet video coding is proposed in this paper. Unlike non-scalable video coding, scalable wavelet video coding needs to operate under multiple bitrate conditions and has an open-loop structure. The conventional Lagrangian multiplier, which is widely used to solve rate-distortion optimization problems in video coding, does not fit well into the scalable wavelet structure. In order to find the rate-distortion trade-off between the bits allocated to motion and texture information, we suggest a motion information gain (MIG) metric to measure motion prediction efficiency. Based on this metric, a new cost function for mode decision is proposed. Compared with the conventional Lagrangian method, our experiments show that the proposed method is less dependent on the extraction bitrate and generally improves both the PSNR performance and the visual quality in the scalability cases.

10.
Improved image compression using S-tree and shading approach
Distasi et al. (see ibid., vol. 45, p. 1095-1100, 1997) presented a storage-saving image compression method called the B-tree triangular coding (BTTC) method. Based on a modified S-tree data structure and the Gouraud shading method, this paper presents an improved image compression method called the S-tree compression (STC) method. Experimental results illustrate that the bit rates and the image quality of the proposed STC method are quite competitive with those of the BTTC method. Due to the simple geometrical decomposition in the STC method, the ratio of the execution time of the proposed method to that of the BTTC method is less than 1/2.

11.
As imaging spectrometers develop toward higher spectral and spatial resolution, the data volume of hyperspectral images grows geometrically. Because of limited data transmission and storage capacity, hyperspectral images must be compressed effectively. First, the correlation properties of hyperspectral images are analysed in depth, showing that they possess a certain degree of spatial correlation and very strong inter-band (spectral) correlation, and are therefore highly compressible. Second, DPCM is modified in combination with JPEG2000, and a lossless compression scheme based on first-order linear prediction combined with JPEG2000 is proposed. Finally, the scheme is implemented on a software platform and achieves good compression results. The results show that the scheme can effectively realise lossless compression of hyperspectral images, verifying its feasibility and providing a theoretical basis for implementing it on a hardware platform.
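The abstract only states that first-order linear prediction is applied across the strongly correlated spectral bands before lossless coding; the sketch below shows one plausible form of such inter-band prediction (the gain/offset fit, rounding and the hand-off to a JPEG2000 lossless coder are assumptions, and the fitted parameters would have to be sent to the decoder).

```python
import numpy as np

def interband_residuals(cube: np.ndarray) -> np.ndarray:
    """First-order linear inter-band prediction for a hyperspectral cube shaped (bands, rows, cols).

    Each band after the first is predicted from the previous band via a least-squares
    gain/offset fit; the integer residuals (plus the first band) would then go to a
    lossless coder such as JPEG2000 in lossless mode.
    """
    cube = cube.astype(np.int32)
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]  # first band has no predecessor and is coded as-is
    for b in range(1, cube.shape[0]):
        a, c = np.polyfit(cube[b - 1].ravel().astype(np.float64),
                          cube[b].ravel().astype(np.float64), 1)   # cur ~ a*prev + c
        pred = np.rint(a * cube[b - 1] + c).astype(np.int32)
        residuals[b] = cube[b] - pred   # low-entropy residual; (a, c) must accompany the bitstream
    return residuals
```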

12.
A two-stage adaptive vector quantization scheme for radiographic image sequence coding is introduced. Each frame in the sequence is first decomposed into a set of vectors, corresponding to nonoverlapping, spatially contiguous blocks of pixels. A codebook is generated using a training set of vectors drawn from the sequence. Each vector is then encoded by the label representing the closest codeword of the codebook, and the label values are stored in a frame label map memory at both ends of the communication channel. The changes occurring in radiographic image sequences can be categorized into two types: those due to body motion and those due to the injected contrast dye material. In the second scheme proposed, encoding is performed in two stages. In the first stage, the labels of corresponding vectors from consecutive frames are compared and the frame label map memory is replenished (updated). This stage is sufficient to track the changes caused by patient motion but not those due to the injected contrast dye material. The residual error vectors remaining after the first-stage coding are calculated for the latter changes and are further encoded with a second codebook, which is updated on a frame-to-frame basis.
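A minimal sketch of the basic vector-quantization step described above, mapping each image block to the label of its nearest codeword, is given below; the block size and codebook are placeholder assumptions.

```python
import numpy as np

def vq_encode(frame: np.ndarray, codebook: np.ndarray, block: int = 4) -> np.ndarray:
    """Encode a frame as a label map: each block x block tile gets the index of the nearest codeword.

    codebook has shape (num_codewords, block * block).
    """
    rows, cols = frame.shape[0] // block, frame.shape[1] // block
    labels = np.empty((rows, cols), dtype=np.int32)
    for i in range(rows):
        for j in range(cols):
            tile = frame[i * block:(i + 1) * block, j * block:(j + 1) * block].astype(np.float64).ravel()
            labels[i, j] = int(np.argmin(np.sum((codebook - tile) ** 2, axis=1)))  # squared Euclidean distance
    return labels
```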

13.
Ozkaramanli, H., Ozmen, B. Electronics Letters, 2003, 39(22): 1578-1579
The image compression performance of new multi-wavelets constructed using B-spline super functions is compared with that of existing multi-wavelets. First, orthogonal, approximation-order-preserving pre-filters are designed, and then an extensive comparative performance analysis in image compression is carried out. The results confirm the usefulness of the super-function design criteria in image compression. The new multi-wavelets show excellent performance, outperforming most of the well-known multi-wavelets for a large number of still images at almost all compression ratios considered.

14.
Region-based fractal image compression using heuristic search
Presents work carried out on fractal (or attractor) image compression. The approach relies on the assumption that image redundancy can be efficiently exploited through self-transformability. The algorithms described utilize a novel region-based partition of the image that greatly increases the compression ratios achieved over traditional block-based partitionings. Due to the large search spaces involved, heuristic algorithms are used to construct these region-based transformations. Results for three different heuristic algorithms are given. The results show that the region-based system achieves almost double the compression ratio of the simple block-based system at a similar decompressed image quality. For the Lena image, compression ratios of 41:1 can be achieved at a PSNR of 26.56 dB.

15.
Video frame memory compression has gained increased popularity in video processing ICs to save external memory storage size and reduce memory access bandwidth. This technique is especially important in portable devices, where efficient use of energy is critical for the deployment of video applications. In this paper, we propose a low-complexity lossless image compression method that uses only a fraction of one line buffer. The proposed method first employs an integer wavelet transform (IWT), then predicts the low-frequency coefficients of each segment from those of the segment in the line above, and finally applies Golomb-Rice (GR) encoding to achieve low-cost and highly efficient compression. Simulation results demonstrate that the proposed method gives a compression ratio comparable with the existing state-of-the-art low-complexity methods while significantly lowering the internal memory cost and keeping the complexity low.
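To illustrate the Golomb-Rice stage mentioned above, here is a minimal encoder for non-negative integers; signed prediction residuals would first be mapped to non-negative values (e.g. by the usual zig-zag mapping), and the choice of the Rice parameter k is an assumption left to the caller.

```python
def golomb_rice_encode(values, k: int) -> str:
    """Golomb-Rice code a sequence of non-negative integers with divisor 2**k.

    Each value v is split into a quotient q = v >> k, coded in unary as q ones
    followed by a zero, and a remainder coded in k binary digits.
    """
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")                      # unary quotient
        if k > 0:
            bits.append(format(r, "0{}b".format(k)))    # fixed-length remainder
    return "".join(bits)

# Example: golomb_rice_encode([3, 0, 7, 2], k=2) -> "0110001011010"
```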

16.
Lossless image compression using ordered binary-decision diagrams
A lossless compression algorithm for images based on ordered binary-decision diagrams (OBDDs) is presented. The algorithm finds an OBDD that represents the image exactly and then codes the OBDD efficiently. The results obtained show a great improvement with respect to previous work.

17.
Context-based modeling is an important step in high-performance lossless data compression. To effectively define and utilize contexts for natural images is, however, a difficult problem. This is primarily due to the huge number of contexts available in natural images, which typically results in higher modeling costs, leading to reduced compression efficiency. Motivated by the prediction-by-partial-matching (PPM) context model that has been very successful in text compression, we present prediction by partial approximate matching (PPAM), a method for compression and context modeling for images. Unlike the PPM modeling method, which uses exact contexts, PPAM introduces the notion of approximate contexts. Thus, PPAM models the probability of the encoding symbol based on its previous contexts, whereby context occurrences are considered in an approximate manner. The proposed method has competitive compression performance when compared with other popular lossless image compression algorithms. It shows a particularly superior performance when compressing images that have common features, such as biomedical images.
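The idea of an approximate context can be illustrated with a toy lookup: rather than requiring an exact match, previously seen contexts within a small distance of the current one contribute to the symbol statistics. The L1 distance and tolerance used below are illustrative assumptions, not PPAM's actual matching rule.

```python
import numpy as np

def approximate_context_counts(context: np.ndarray, history: list, tol: float = 8.0) -> dict:
    """Collect symbol counts over previously seen (context, symbol) pairs whose context lies
    within an L1 tolerance of the current one (an approximate rather than exact match)."""
    counts = {}
    for past_ctx, symbol in history:
        if np.mean(np.abs(past_ctx.astype(np.float64) - context)) <= tol:
            counts[symbol] = counts.get(symbol, 0) + 1
    return counts  # these counts would drive the probability model of an arithmetic coder
```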

18.
Test data volume has increased enormously owing to the rising on-chip complexity of integrated circuits, which in turn increases test data transportation time and tester memory requirements. Uncorrelated test bits also aggravate the test-power problem. This paper presents a two-stage block-merging-based test data minimization scheme that reduces test bits, test time and test power. The test data are partitioned into blocks of fixed size, which are compressed using a two-stage encoding technique. In stage one, successive blocks are merged to retain a representative block. In stage two, the retained pattern block is further encoded based on which of ten different sub-cases holds between the sub-blocks formed by splitting the retained pattern block into two halves. Non-compatible blocks are also split into two sub-blocks, which are then encoded using fewer bits where possible. A decompression architecture to retrieve the original test data is presented. Simulation results for different ISCAS'89 benchmark circuits reflect the scheme's effectiveness in achieving better compression.

19.
The well-known low-complexity JPEG and the newer JPEG-XR systems are based on block-based transforms and simple transform-domain coefficient prediction algorithms. Higher-complexity image compression algorithms, obtainable from the intra-frame coding tools of the video coders H.264 or HEVC, are based on multiple block-based spatial-domain prediction modes and transforms. This paper explores an alternative low-complexity image compression approach based on a single spatial-domain prediction mode and transform, which are designed based on a global image model. In our experiments, the proposed single-mode approach uses on average a 20.5% lower bit rate than a standard low-complexity single-mode image coder that uses only conventional DC spatial prediction and the 2-D DCT. It also does not suffer from blocking effects at low bit rates.
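For reference, the conventional single-mode baseline mentioned above, DC spatial prediction followed by a 2-D DCT of the residual block, can be sketched as follows; the block size, neighbour handling and the surrounding quantisation/entropy-coding stages are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def dc_predict_and_transform(block: np.ndarray, left_col: np.ndarray, top_row: np.ndarray) -> np.ndarray:
    """DC-predict an N x N block from its reconstructed left/top neighbours, then 2-D DCT the residual."""
    dc = np.mean(np.concatenate([left_col, top_row]))  # single flat (DC) prediction value
    residual = block.astype(np.float64) - dc
    return dctn(residual, norm="ortho")                # coefficients would then be quantised and entropy coded
```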

20.
An image can be decomposed into a structural component and a geometric textural component. Based on this idea, an efficient two-layered compression algorithm is proposed that uses second-generation bandelets and wavelets. First, the original image is decomposed into the structural component and the textural component, and then these two components are compressed using wavelets and second-generation bandelets, respectively. Numerical tests show that the proposed method works better than bandelets and JPEG2000 in some specific SAR scenes.
