Similar Articles
1.
During the last decade, there has been increasing interest in the design of very fast wavelet image encoders focused on specific applications like interactive real-time image and video systems running on power-constrained devices such as digital cameras and mobile phones, where coding delay and/or the available computing resources (working memory and processing power) are critical for proper operation. In order to reduce complexity, most of these fast wavelet image encoders are non-(SNR)-embedded and, as a consequence, precise rate control is not supported. In this work, we propose some simple rate control algorithms for this kind of encoder and analyze their impact to determine whether, despite their inclusion, the global encoder remains competitive with respect to popular embedded encoders like SPIHT and JPEG2000. In this study we focus on the non-embedded LTW encoder, showing that despite the increase in complexity due to the inclusion of the rate control algorithm, LTW remains competitive with SPIHT and JPEG2000 in terms of R/D performance, coding delay and memory consumption.
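A minimal illustration of the kind of rate control discussed above (not the paper's algorithm): a bisection search over a single quality knob of a black-box encoder until the output fits a byte budget. Pillow's JPEG codec and the toy image are stand-ins for the LTW wavelet encoder.

    # Rate-control sketch: bisection on a quality parameter of a black-box
    # encoder; Pillow JPEG is only a stand-in for a fast wavelet coder.
    import io
    from PIL import Image

    def encoded_size(img, quality):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        return buf.getbuffer().nbytes

    def rate_control(img, target_bytes):
        lo, hi, best = 1, 95, 1
        while lo <= hi:
            q = (lo + hi) // 2
            if encoded_size(img, q) <= target_bytes:
                best, lo = q, q + 1      # fits the budget: try higher quality
            else:
                hi = q - 1               # too large: lower quality
        return best

    if __name__ == "__main__":
        im = Image.effect_noise((256, 256), 32)     # toy grayscale image
        q = rate_control(im, target_bytes=4000)
        print("quality", q, "->", encoded_size(im, q), "bytes")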

2.
曾勇 《电子科技》2011,24(7):122-125
A new rate control algorithm based on the JPEG2000 compression algorithm is proposed, which improves the coding efficiency of the JPEG2000 standard over a wide range. Building on the progressive truncation algorithm, it incorporates layer-by-layer bit-plane truncation, reducing redundant coding work and algorithmic complexity. Extensive tests show that the PSNR of the proposed algorithm is only 0.05-0.1 dB lower than that of the standard JPEG2000 compression algorithm.

3.
We describe a procedure by which Joint Photographic Experts Group (JPEG) compression may be customized for gray-scale images that are to be compressed before they are scaled, halftoned, and printed. Our technique maintains 100% compatibility with the JPEG standard, and is applicable with all scaling and halftoning methods. The JPEG quantization table is designed using frequency-domain characteristics of the scaling and halftoning operations, as well as the frequency sensitivity of the human visual system. In addition, the Huffman tables are optimized for low-rate coding. Compression artifacts are significantly reduced because they are masked by the halftoning patterns, and pushed into frequency bands where the eye is less sensitive. We describe how the frequency-domain effects of scaling and halftoning may be measured, and how to account for those effects in an iterative design procedure for the JPEG quantization table. We also present experimental results suggesting that the customized JPEG encoder typically maintains "near visually lossless" image quality at rates below 0.5 b/pixel (with reference to the number of pixels in the original image) when it is used with bilinear interpolation and either error diffusion or ordered dithering. Based on these results, we believe that in terms of the achieved bit rate, the performance of our encoder is typically at least 20% better than that of a JPEG encoder using the suggested baseline tables.
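A toy version of shaping a JPEG quantization table with a frequency-sensitivity weighting (the paper's procedure additionally models the scaling and halftoning operations): the baseline luminance table is divided by a normalized contrast-sensitivity estimate so that bands where the eye is less sensitive get coarser steps. The Mannos-Sakrison-style CSF and the cycles-per-degree mapping below are assumptions, not values from the paper.

    # Toy quantization-table shaping: weight the baseline JPEG luminance table
    # by an approximate contrast sensitivity function (CSF). The CSF form and
    # viewing geometry are assumptions; the paper also folds in the measured
    # frequency response of scaling and halftoning.
    import numpy as np

    JPEG_LUMA_Q = np.array([                 # baseline luminance table
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

    def csf(f):   # Mannos-Sakrison-like CSF, f in cycles/degree (assumed form)
        return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

    def shaped_table(cpd_per_band=2.0, strength=1.0):
        u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
        w = csf(np.maximum(cpd_per_band * np.hypot(u, v), 1e-3))
        w /= w.max()                         # 1.0 at the most visible band
        q = JPEG_LUMA_Q / (w ** strength)    # less visible -> larger step size
        q[0, 0] = JPEG_LUMA_Q[0, 0]          # leave the DC step untouched
        return np.clip(np.round(q), 1, 255).astype(int)

    print(shaped_table())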

4.
Image compression is indispensable in medical applications where inherently large volumes of digitized images are presented. JPEG 2000 has recently been proposed as a new image compression standard. The present recommendations on the choice of JPEG 2000 encoder options were based on nontask-based metrics of image quality applied to nonmedical images. We used the performance of a model observer [non-prewhitening matched filter with an eye filter (NPWE)] in a visual detection task of varying signals [signal known exactly but variable (SKEV)] in X-ray coronary angiograms to optimize JPEG 2000 encoder options through a genetic algorithm procedure. We also obtained the performance of other model observers (Hotelling, Laguerre-Gauss Hotelling, channelized-Hotelling) and human observers to evaluate the validity of the NPWE-optimized JPEG 2000 encoder settings. Compared to the default JPEG 2000 encoder settings, the NPWE-optimized encoder settings improved the detection performance of humans and the other three model observers for an SKEV task. In addition, the performance also was improved for a more clinically realistic task where the signal varied from image to image but was not known a priori to observers [signal known statistically (SKS)]. The highest performance improvement for humans was at a high compression ratio (e.g., 30:1), which resulted in approximately a 75% improvement for both the SKEV and SKS tasks.

5.
An image compression framework based on adaptive downsampling and super-resolution reconstruction is designed for the Joint Photographic Experts Group (JPEG) standard. At the encoder, several different downsampling modes and quantization modes are designed for the original image to be coded, and a rate-distortion optimization algorithm selects the optimal downsampling mode (DSM) and quantization mode (QM); the image is then downsampled and JPEG-encoded under the selected modes. At the decoder, a convolutional-neural-network-based super-resolution reconstruction algorithm is applied to the decoded downsampled image. The proposed framework also remains effective when extended to the JPEG2000 compression standard. Simulation results show that, compared with mainstream coding standards and state-of-the-art coding methods, the proposed framework effectively improves the rate-distortion performance of coded images and yields better visual quality.
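A rough sketch of the rate-distortion mode selection described above. Pillow's JPEG codec and a plain bicubic upscale stand in for the paper's codec and CNN-based super-resolution; the candidate mode lists and the Lagrange multiplier are illustrative.

    # RD mode selection over (downsampling mode, quantization mode) pairs with
    # cost J = D + lambda * R. Pillow JPEG and bicubic resizing (Pillow's
    # default) stand in for the paper's codec and CNN super-resolution.
    import io
    import numpy as np
    from PIL import Image

    def encode_decode(img, scale, quality):
        w, h = img.size
        small = img.resize((max(1, int(w * scale)), max(1, int(h * scale))))
        buf = io.BytesIO()
        small.save(buf, format="JPEG", quality=quality)
        rec = Image.open(io.BytesIO(buf.getvalue())).resize((w, h))
        return buf.getbuffer().nbytes, rec

    def best_mode(img, lam=0.02):
        ref, best = np.asarray(img, float), None
        for scale in (1.0, 0.75, 0.5):               # candidate downsampling modes (DSM)
            for quality in (30, 50, 70, 90):         # candidate quantization modes (QM)
                rate, rec = encode_decode(img, scale, quality)
                mse = float(np.mean((ref - np.asarray(rec, float)) ** 2))
                cost = mse + lam * rate
                if best is None or cost < best[0]:
                    best = (cost, scale, quality, rate, mse)
        return best

    if __name__ == "__main__":
        im = Image.effect_noise((128, 128), 64)      # toy image; use a real one in practice
        print(best_mode(im))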

6.
Arguably, the most important and defining feature of the JPEG2000 image compression standard is its R-D optimized code stream of multiple progressive layers. This code stream is an interleaving of many scalable code streams of different sample blocks. In this paper, we reexamine the R-D optimality of JPEG2000 scalable code streams under an expected multirate distortion measure (EMRD), which is defined to be the average distortion weighted by a probability distribution of operational rates in a given range, rather than for one or few fixed rates. We prove that the JPEG2000 code stream constructed by embedded block coding of optimal truncation is almost optimal in the EMRD sense for uniform rate distribution function, even if the individual scalable code streams have nonconvex operational R-D curves. We also develop algorithms to optimize the JPEG2000 code stream for exponential and Laplacian rate distribution functions while maintaining compatibility with the JPEG2000 standard. Both of our analytical and experimental results lend strong support to JPEG2000 as a near-optimal scalable image codec in a fairly general setting.
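Written out, the expected multirate distortion measure described above is just an average of the operational distortion over a rate range; the notation below is a reading of this abstract, not the paper's exact formulation.

    \mathrm{EMRD} \;=\; \int_{R_{\min}}^{R_{\max}} D(r)\, p(r)\,\mathrm{d}r ,
    \qquad p(r) = \frac{1}{R_{\max}-R_{\min}} \ \text{(uniform case)}, \quad
    p(r) \propto e^{-\lambda r} \ \text{(exponential case)} .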

7.
Wireless multimedia sensor networks (WMSNs) are potentially applicable to several emerging applications. The resources, i.e., power and bandwidth, available to visual sensors in a WMSN are, however, very limited. Hence, it is important but challenging to achieve efficient resource allocation and optimal video data compression while maximizing the overall network lifetime. In this paper, a power-rate-distortion (PRD) optimized resource-scalable low-complexity multiview video encoding scheme is proposed. In our video encoder, both the temporal and inter-view information can be exploited based on the comparisons of extracted media hashes without performing motion and disparity estimations, which are known to be time-consuming. We present a PRD model to characterize the relationship between the available resources and the RD performance of our encoder. More specifically, an RD function in terms of the percentages for different coding modes of blocks and the target bit rate under the available resource constraints is derived for optimal coding mode decision. The major goal here is to design a PRD model to optimize a “motion estimation-free” low-complexity video encoder for applications with resource-limited devices, instead of designing a general-purpose video codec to compete in compression performance with current compression standards (e.g., H.264/AVC). Analytic results verify the accuracy of our PRD model, which can provide a theoretical guideline for performance optimization under limited resource constraints. Simulation results on joint RD performance and power consumption (measured in terms of encoding time) demonstrate the applicability of our video coding scheme for WMSNs.

8.
Electronics Letters, 1999, 35(18): 1515-1516
It is shown that the compression achieved by a two-stage lossless data compression scheme can be improved significantly by choosing an efficient encoder for the second stage. The investigation is carried out by considering both classical and neural network predictors. The performance of each of these predictors in conjunction with different encoders is evaluated in terms of the compression ratio by using test telemetry data files, and from the results obtained the best two-stage compression schemes are identified.
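The two-stage idea (predict, then entropy-code the residuals) can be mocked up with the Python standard library; zlib, bz2 and lzma stand in for the candidate second-stage encoders, and the previous-sample predictor is an assumption, not the letter's classical or neural predictors.

    # Two-stage lossless compression sketch: stage 1 predicts each sample from
    # the previous one and keeps the residual; stage 2 tries several
    # general-purpose encoders on the residual stream and compares sizes.
    import bz2, lzma, zlib
    import numpy as np

    rng = np.random.default_rng(0)
    data = np.cumsum(rng.integers(-3, 4, 10_000)).astype(np.int16)   # toy telemetry

    residual = np.diff(data, prepend=data[:1]).astype(np.int16)      # x[n] - x[n-1]
    raw, res = data.tobytes(), residual.tobytes()

    for name, codec in (("zlib", zlib), ("bz2", bz2), ("lzma", lzma)):
        print(f"{name:5s} raw: {len(codec.compress(raw)):6d} B   "
              f"residual: {len(codec.compress(res)):6d} B")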

9.
The working principle of photoelectric (optical) encoders is described, and optical encoders are divided into two categories according to their encoding schemes; the performance characteristics, advantages, and disadvantages of the two categories are summarized and compared. In view of the current state of research on encoder testing technology, common encoder testing methods and models are introduced, along with existing optical encoder testing techniques and instruments at home and abroad, and the characteristics, applicable scenarios, and shortcomings of each testing method are compared and analyzed. Finally, the trend of future optical encoder testing technology toward higher precision, higher efficiency, and broader applicability is summarized.

10.
Differentiation applied to lossless compression of medical images
Lossless compression of medical images using a proposed differentiation technique is explored. This scheme is based on computing weighted differences between neighboring pixel values. The performance of the proposed approach, for the lossless compression of magnetic resonance (MR) images and ultrasonic images, is evaluated and compared with the lossless linear predictor and the lossless Joint Photographic Experts Group (JPEG) standard. The residue sequence of these techniques is coded using arithmetic coding. The proposed scheme yields compression measures, in terms of bits per pixel, that are comparable with or lower than those obtained using the linear predictor and the lossless JPEG standard, respectively, with 8-bit medical images. The advantages of the differentiation technique presented here over the linear predictor are: 1) the coefficients of the differentiator are known by the encoder and the decoder, which eliminates the need to compute or encode these coefficients, and 2) the computational complexity is greatly reduced. These advantages are particularly attractive in real-time processing for compressing and decompressing medical images.
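A compact sketch of the differencing step: residuals are formed against a prediction that is a fixed weighted combination of causal neighbours (left, up, upper-left). The weights below are illustrative, not the paper's; because they are fixed, nothing has to be computed or transmitted for the decoder.

    # Weighted-difference residuals over causal neighbours. Fixed weights mean
    # the decoder needs no side information; border pixels simply see zero
    # neighbours here, which a real coder would handle separately.
    import numpy as np

    def weighted_diff_residual(img, w=(0.75, 0.75, -0.5)):
        x = img.astype(np.float64)
        left = np.zeros_like(x);   left[:, 1:]     = x[:, :-1]
        up = np.zeros_like(x);     up[1:, :]       = x[:-1, :]
        upleft = np.zeros_like(x); upleft[1:, 1:]  = x[:-1, :-1]
        pred = np.rint(w[0] * left + w[1] * up + w[2] * upleft)
        return img.astype(np.int32) - pred.astype(np.int32)   # residuals to be entropy coded

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        toy = rng.integers(0, 256, (8, 8), dtype=np.uint8)
        print(weighted_diff_residual(toy))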

11.
We investigate the joint source-channel coding problem of transmitting nonuniform memoryless sources over binary phase-shift keying-modulated additive white Gaussian noise and Rayleigh fading channels via turbo codes. In contrast to previous work, recursive nonsystematic convolutional encoders are proposed as the constituent encoders for heavily biased sources. We prove that under certain conditions, and when the length of the input source sequence tends to infinity, the encoder state distribution and the marginal output distribution of each constituent recursive convolutional encoder become asymptotically uniform, regardless of the degree of source nonuniformity. We also give a conjecture (which is empirically validated) on the condition for the higher order distribution of the encoder output to be asymptotically uniform, irrespective of the source distribution. Consequently, these conditions serve as design criteria for the choice of good encoder structures. As a result, the outputs of our selected nonsystematic turbo codes are suitably matched to the channel input, since a uniformly distributed input maximizes the channel mutual information, and hence, achieves capacity. Simulation results show substantial gains by the nonsystematic codes over previously designed systematic turbo codes; furthermore, their performance is within 0.74-1.17 dB from the Shannon limit. Finally, we compare our joint source-channel coding system with two tandem schemes which employ a fourth-order Huffman code (performing near-optimal data compression) and a turbo code that either gives excellent waterfall bit-error rate (BER) performance or good error-floor performance. At the same overall transmission rate, our system offers robust and superior performance at low BERs (< 10^-4), while its complexity is lower.

12.
This paper presents a modified JPEG coder that is applied to the compression of mixed documents (containing text, natural images, and graphics) for printing purposes. The modified JPEG coder proposed in this paper takes advantage of the distinct perceptually significant regions in these documents to achieve higher perceptual quality than the standard JPEG coder. The region adaptivity is performed via classified thresholding while remaining totally compliant with the baseline standard. A computationally efficient classification algorithm is presented, and the improved performance of the classified JPEG coder is verified.
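One way to picture the classified-thresholding step (the classifier and thresholds below are placeholders, not the paper's algorithm): each 8x8 block is classified from its AC activity and small DCT coefficients are zeroed with a class-dependent threshold before standard quantization, so any baseline JPEG decoder can still read the result.

    # Classified-thresholding sketch: classify an 8x8 block by its AC energy
    # (a crude stand-in for a text/image/graphics classifier) and zero small
    # AC coefficients with a class-dependent threshold before the usual
    # quantization; the bitstream stays baseline-JPEG decodable.
    import numpy as np
    from scipy.fft import dctn

    THRESHOLDS = {"low_activity": 2.0, "high_activity": 8.0}   # placeholder values

    def classify_and_threshold(block):
        c = dctn(block.astype(float) - 128.0, norm="ortho")
        ac_energy = float(np.sum(c ** 2) - c[0, 0] ** 2)
        cls = "high_activity" if ac_energy > 2000.0 else "low_activity"
        mask = np.abs(c) < THRESHOLDS[cls]
        mask[0, 0] = False                       # never drop the DC coefficient
        c[mask] = 0.0
        return cls, c

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        blk = rng.integers(0, 256, (8, 8))
        cls, coeffs = classify_and_threshold(blk)
        print(cls, np.count_nonzero(coeffs), "coefficients kept")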

13.
To maximize rate-distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only results in a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested over standard test images, it achieves the best JPEG compression results, to the extent that its own JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders such as Shapiro's embedded zerotree wavelet algorithm at the common bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to application areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding.

14.
To efficiently compress rasterized compound documents, an encoder must be content-adaptive. Content adaptivity may be achieved by employing a layered approach. In such an approach, a compound image is segmented into layers so that appropriate encoders can be used to compress these layers individually. A major factor in using standard encoders efficiently is to match the layers’ characteristics to those of the encoders by using data filling techniques to fill in the initially sparse layers. In this work we present a review of methods dealing with data filling and also propose a sub-optimal non-linear projections scheme that efficiently matches the baseline JPEG coder in compressing background layers, leading to smaller files with better image quality.
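A minimal data-filling pass, to make the idea concrete (real fill methods, including the nonlinear-projection scheme proposed here, are considerably more elaborate): background-layer pixels hidden by foreground text or graphics carry no information, so they are replaced by something cheap for JPEG to code, here simply the mean of the visible background.

    # Minimal data filling for the background layer of a layered compound
    # document: pixels covered by the foreground mask are replaced by the mean
    # of the visible background so the JPEG coder does not spend bits on
    # meaningless sharp transitions.
    import numpy as np

    def fill_background(background, foreground_mask):
        filled = background.astype(np.float64)
        filled[foreground_mask] = filled[~foreground_mask].mean()
        return filled.astype(background.dtype)

    if __name__ == "__main__":
        bg = np.full((64, 64), 200, dtype=np.uint8)
        bg[10:20, 10:40] = 0                    # text pixels left behind in the layer
        mask = bg < 128                         # where the foreground layer is active
        print(int(fill_background(bg, mask)[12, 12]))   # -> filled with the background value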

15.
Previous studies have evaluated the effect of the new still image compression standard JPEG 2000 using nontask-based image quality metrics, i.e., peak signal-to-noise ratio (PSNR), for nonmedical images. In this paper, the effect of JPEG 2000 encoder options was investigated using the performance of human and model observers (nonprewhitening matched filter with an eye filter, square-window Hotelling, Laguerre-Gauss Hotelling and channelized Hotelling model observer) for clinically relevant visual tasks. Two tasks were investigated: the signal known exactly but variable task (SKEV) and the signal known statistically task (SKS). Test images consisted of real X-ray coronary angiograms with simulated filling defects (signals) inserted in one of the four simulated arteries. The signals varied in size and shape. Experimental results indicated that the dependence of task performance on the JPEG 2000 encoder options was similar for all model and human observers. Model observer performance in the more tractable and computationally economic SKEV task can be used to reliably estimate performance in the complex but clinically more realistic SKS task. JPEG 2000 encoder settings different from the default ones resulted in greatly improved model and human observer performance in the studied clinically relevant visual tasks using real angiography backgrounds.

16.
In most recording channels, modulation codes are employed to transform user data to sequences that satisfy some desirable constraint. Run-length-limited (RLL(d,k)) and maximum transition run (MTR(j,k)) systems are examples of constraints that improve timing and detection performance. A modulation encoder typically takes the form of a finite-state machine. Alternatively, a look-ahead encoder can be used instead of a finite-state encoder to reduce complexity. Its encoding process involves a delay called look-ahead. If the input labeling of a look-ahead encoder allows block decodability, the encoder is called a bounded-delay-encodable block-decodable (BDB) encoder. These classes of encoders can be viewed as generalizations of the well-known deterministic and block-decodable encoders. Other related classes are finite-anticipation and sliding-block decodable encoders. In this paper, we clarify the relationship among these encoders. We also discuss the characterization of look-ahead and BDB encoders using the concept of path-classes. To minimize encoder complexity, look-ahead is desired to be small. We show that for nonreturn-to-zero-inverted (NRZI) versions of RLL(0,k), RLL(1,k), and RLL(d,∞), a BDB encoder does not yield a higher rate than an optimal block-decodable encoder. However, for RLL(d,k) such that d ≥ 4 and d + 2 ≤ k …
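For readers unfamiliar with the constraint, a small checker for RLL(d,k) sequences in NRZI form, where every run of 0s between consecutive 1s must contain at least d and at most k zeros; this only verifies the constraint and is not one of the encoders discussed above.

    # RLL(d,k) constraint check on an NRZI-style binary sequence: runs of 0s
    # between consecutive 1s must have length >= d, and no run of 0s anywhere
    # may exceed k.
    def satisfies_rll(bits, d, k):
        run, seen_one = 0, False
        for b in bits:
            if b == 0:
                run += 1
                if run > k:
                    return False
            else:
                if seen_one and run < d:
                    return False
                run, seen_one = 0, True
        return True

    print(satisfies_rll([0, 1, 0, 0, 1, 0, 0, 0, 1], d=1, k=3))   # True
    print(satisfies_rll([1, 1, 0, 0, 1], d=1, k=3))               # False (adjacent 1s)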

17.
In this letter, an error-robust and JPEG compliant progressive image compression scheme over wireless channels is presented. The use of restart markers in the JPEG standard provides the resynchronization function for error handling. Unfortunately, misinterpreted markers may cause serious error damage due to the error propagation. Therefore, a restart marker regulation technique is proposed here to preprocess restart markers at the decoding end. All erroneous restart markers are corrected and rearranged in the correct order. After decoding, isolated erroneous restart intervals are detected and further processed by the error concealment to reduce image degradation. The simulations demonstrate that the proposed scheme does significantly improve the image quality in error-prone environments.
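To make the marker bookkeeping concrete, the snippet below only locates JPEG restart markers (RST0-RST7, bytes 0xFFD0-0xFFD7) in an entropy-coded segment and flags indices that break the expected modulo-8 cycle; the letter's actual correction, reordering and concealment steps are not reproduced.

    # Detection half of a restart-marker regulation step: find RST0..RST7
    # markers (0xFFD0..0xFFD7) and report those whose modulo-8 index does not
    # follow the expected cycle.
    def check_restart_markers(scan: bytes):
        expected, positions, problems = 0, [], []
        i = 0
        while i < len(scan) - 1:
            if scan[i] == 0xFF and 0xD0 <= scan[i + 1] <= 0xD7:
                idx = scan[i + 1] - 0xD0
                positions.append((i, idx))
                if idx != expected:
                    problems.append((i, idx, expected))
                expected = (idx + 1) % 8      # resynchronise on what was actually seen
                i += 2
            else:
                i += 1
        return positions, problems

    demo = b"\x12\x34\xff\xd0\x56\xff\xd1\x78\xff\xd3\x9a"   # RST2 lost or corrupted
    print(check_restart_markers(demo)[1])                    # -> [(8, 3, 2)]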

18.
Convolutional codes I: Algebraic structure
A convolutional encoder is defined as any constant linear sequential circuit. The associated code is the set of all output sequences resulting from any set of input sequences beginning at any time. Encoders are called equivalent if they generate the same code. The invariant factor theorem is used to determine when a convolutional encoder has a feedback-free inverse, and the minimum delay of any inverse. All encoders are shown to be equivalent to minimal encoders, which are feedback-free encoders with feedback-free delay-free inverses, and which can be realized in the conventional manner with as few memory elements as any equivalent encoder. Minimal encoders are shown to be immune to catastrophic error propagation and, in fact, to lead in a certain sense to the shortest decoded error sequences possible per error event. In two appendices, we introduce dual codes and syndromes, and show that a minimal encoder for a dual code has exactly the complexity of the original encoder; we show that systematic encoders with feedback form a canonical class, and compare this class to the minimal class.
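As a concrete instance of the "constant linear sequential circuit" defined above, here is the textbook rate-1/2 feedback-free encoder with generator polynomials (7, 5) in octal; it is purely illustrative and not tied to this paper's constructions.

    # Rate-1/2 feedback-free convolutional encoder with generators (7, 5)
    # octal, i.e. g0 = 1 + D + D^2 and g1 = 1 + D^2.
    def conv_encode(bits, generators=(0b111, 0b101), memory=2):
        state, out = 0, []
        for b in bits:
            reg = (b << memory) | state            # current input + last `memory` inputs
            for g in generators:
                out.append(bin(reg & g).count("1") % 2)
            state = reg >> 1                       # shift register update
        return out

    print(conv_encode([1, 0, 1, 1]))               # -> [1, 1, 1, 0, 0, 0, 0, 1]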

19.
Binned progressive quantization for compressive sensing
Compressive sensing (CS) has been recently and enthusiastically promoted as a joint sampling and compression approach. The advantages of CS over conventional signal compression techniques are architectural: the CS encoder is made signal independent and computationally inexpensive by shifting the bulk of system complexity to the decoder. While these properties of CS allow signal acquisition and communication in some severely resource-deprived conditions that render conventional sampling and coding impossible, they are accompanied by rather disappointing rate-distortion performance. In this paper, we propose a novel coding technique that rectifies, to a certain extent, the problem of poor compression performance of CS and, at the same time, maintains the simplicity and universality of the current CS encoder design. The main innovation is a scheme of progressive fixed-rate scalar quantization with binning that enables the CS decoder to exploit hidden correlations between CS measurements, which was overlooked in the existing literature. Experimental results are presented to demonstrate the efficacy of the new CS coding technique. Encouragingly, on some test images, the new CS technique matches or even slightly outperforms JPEG.
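The binning idea in miniature (the quantizers, bin count and prediction below are placeholders, not the paper's construction): the encoder transmits only the fine quantization index modulo B, and the decoder resolves the ambiguity by picking, within that bin, the candidate closest to a prediction formed from correlated measurements.

    # Binned scalar quantization sketch: send only the fine index modulo B
    # (the bin label); the decoder picks the candidate in that bin closest to
    # a prediction obtained from correlated side information.
    import numpy as np

    STEP, B = 0.5, 4

    def encode(x):
        return int(np.round(x / STEP)) % B          # bin label only (log2(B) bits)

    def decode(bin_label, prediction):
        q_pred = int(np.round(prediction / STEP))
        base = q_pred - ((q_pred - bin_label) % B)  # index <= q_pred carrying this label
        q = min((base - B, base, base + B), key=lambda c: abs(c - q_pred))
        return q * STEP

    x, side_info = 3.37, 3.1                        # true value, correlated prediction
    print(decode(encode(x), side_info))             # -> 3.5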
