Similar Documents
Found 20 similar documents (search time: 21 ms)
1.
This paper presents a new three-dimensional (3-D) wavelet-based scalable lossless coding scheme for compression of volumetric medical images. Aiming to improve the productivity of radiologists and the cost-effectiveness of the system, we strive to achieve high decoder throughput, random access to coded data volume, progressive transmission, and high compression ratio in a balanced design approach. These desirable functionalities are realized by a modified 3-D dyadic wavelet transform tailored to volumetric medical images and an optimized Rice code of very low complexity.
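The abstract does not detail the authors' optimized Rice code. As an illustrative sketch only (not their implementation; function names are hypothetical), a plain Rice code with parameter k encodes a non-negative integer as a unary quotient followed by k remainder bits:

```python
def rice_encode(value, k):
    """Rice code with parameter k >= 1: unary quotient (q ones and a
    terminating zero) followed by the k low-order remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

def rice_decode(bits, k):
    """Decode one Rice codeword from a bit string."""
    q = bits.index("0")                    # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2)      # k remainder bits
    return (q << k) | r

# Example: 19 with k=2 -> quotient 4, remainder 3 -> "1111" + "0" + "11"
code = rice_encode(19, 2)
```

Decoder throughput is one reason Rice codes suit this setting: decoding is a single scan for the unary part plus a fixed-width read, with no table lookups.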

2.
We study lossy-to-lossless compression of medical volumetric data using three-dimensional (3-D) integer wavelet transforms. To achieve good lossy coding performance, it is important to have transforms that are unitary. In addition to the lifting approach, we first introduce a general 3-D integer wavelet packet transform structure that allows implicit bit shifting of wavelet coefficients to approximate a 3-D unitary transformation. We then focus on context modeling for efficient arithmetic coding of wavelet coefficients. Two state-of-the-art 3-D wavelet video coding techniques, namely, 3-D set partitioning in hierarchical trees (Kim et al., 2000) and 3-D embedded subband coding with optimal truncation (Xu et al., 2001), are modified and applied to compression of medical volumetric data, achieving the best performance published so far in the literature, in terms of both lossy and lossless compression.
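The lifting approach mentioned above maps integers to integers, which is what makes lossy-to-lossless coding possible. A minimal one-level sketch (illustrative, not the paper's 3-D packet transform) using the integer Haar (S) transform, the simplest such lifting pair:

```python
def haar_lift_forward(x):
    """One level of the integer Haar (S) transform via lifting:
    d = odd - even (predict step), s = even + floor(d/2) (update step).
    Integer-to-integer, hence exactly invertible -> lossless."""
    s, d = [], []
    for i in range(0, len(x), 2):
        diff = x[i + 1] - x[i]
        d.append(diff)
        s.append(x[i] + (diff >> 1))   # >> 1 is floor division, keeps integers
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    x = []
    for si, di in zip(s, d):
        even = si - (di >> 1)
        x.extend([even, even + di])
    return x
```

The same predict/update pattern, with longer filters and per-subband bit shifts, underlies the 3-D integer transforms the paper studies.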

3.
Lossless compression of video using temporal information
We consider the problem of lossless compression of video by taking temporal information into account. Lossless video compression is an interesting possibility in production and contribution applications. We propose a compression technique based on motion compensation, optimal three-dimensional (3-D) linear prediction, and context-based Golomb-Rice (1966, 1979) entropy coding. The proposed technique is compared with 3-D extensions of the JPEG-LS standard for still image compression. A compression gain of about 0.8 bit/pel with respect to static JPEG-LS, applied on a frame-by-frame basis, is achievable at a reasonable computational complexity.
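For context on the JPEG-LS baseline used in the comparison (the paper's own 3-D predictor is not specified in this abstract), a sketch of the JPEG-LS median edge detector plus the standard zigzag map that turns signed prediction residuals into non-negative integers suitable for Golomb-Rice coding; names here are illustrative:

```python
def median_predictor(a, b, c):
    """JPEG-LS (LOCO-I) median edge detector: predict the current pixel
    from its left (a), above (b) and above-left (c) neighbours."""
    if c >= max(a, b):
        return min(a, b)      # horizontal edge detected
    if c <= min(a, b):
        return max(a, b)      # vertical edge detected
    return a + b - c          # smooth region: planar prediction

def zigzag(e):
    """Map a signed residual to a non-negative integer
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...) before Golomb-Rice coding."""
    return (e << 1) if e >= 0 else ((-e << 1) - 1)
```

A 3-D extension, as compared against in the paper, additionally draws predictor neighbours from the previous (motion-compensated) frame.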

4.
Three-dimensional encoding/two-dimensional decoding of medical data
We propose a fully three-dimensional (3-D) wavelet-based coding system featuring 3-D encoding/two-dimensional (2-D) decoding functionalities. A fully 3-D transform is combined with context-adaptive arithmetic coding; 2-D decoding is enabled by encoding every 2-D subband image independently. The system allows finely graded quality scalability, up to lossless, on any 2-D image of the dataset. Fast access to 2-D images is obtained by decoding only the corresponding information, thus avoiding the reconstruction of the entire volume. The performance has been evaluated on a set of volumetric data and compared to that provided by other 3-D as well as 2-D coding systems. Results show a substantial improvement in coding efficiency (up to 33%) on volumes featuring good correlation properties along the z axis. Even though we did not address the complexity issue, we expect a decoding time on the order of one second per image after optimization. In summary, the proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the efficient access to single images characteristic of 2-D ones.

5.
Modern video coding applications require data transmission over variable-bandwidth wired and wireless network channels to a variety of terminals, possibly having different screen resolutions and available computing power. Scalable video coding technology is needed to optimally support these applications. Recently proposed wavelet-based video codecs employing spatial-domain motion-compensated temporal filtering (SDMCTF) provide quality, resolution and frame-rate scalability while delivering compression performance comparable to that of H.264, the state-of-the-art in single-layer video coding. These codecs require quality-scalable coding of the motion vectors to support a large range of bit-rates with optimal compression efficiency. In this paper, the practical use of prediction-based scalable motion-vector coding in the context of scalable SDMCTF-based video coding is investigated. Extensive experimental results demonstrate that, irrespective of the employed motion model, our prediction-based scalable motion-vector codec (MVC) systematically outperforms state-of-the-art wavelet-based solutions for both lossy and lossless compression. A new rate-distortion optimized rate-allocation strategy is proposed, capable of optimally distributing the available bit-budget between the different frames and between the texture and motion information, making the integration of the scalable MVC into a scalable video codec possible. This rate-allocation scheme systematically outperforms heuristic approaches previously employed in the literature. Experiments confirm that by using a scalable MVC, lower bit-rates can be attained without sacrificing motion-estimation efficiency and that the overall coding performance at low rates is significantly improved by a better distribution of the available rate between texture and motion information. The only downside of scalable motion-vector coding is a slight performance loss incurred at high bit-rates.

6.
The authors propose a highly scalable image compression scheme based on the set partitioning in hierarchical trees (SPIHT) algorithm. The proposed algorithm, called highly scalable SPIHT (HS-SPIHT), adds the spatial scalability feature to the SPIHT algorithm through the introduction of multiple resolution-dependent lists and a resolution-dependent sorting pass. It keeps the important features of the original SPIHT algorithm such as compression efficiency, full SNR scalability and low complexity. The flexible bitstream of the HS-SPIHT encoder can easily be adapted to various resolution requirements at any bit rate. The parsing process can be carried out on-the-fly without decoding the bitstream by a simple parser (transcoder) that forms a part of a smart network. The HS-SPIHT algorithm is further developed for fully scalable coding of arbitrarily shaped visual objects. The proposed highly scalable algorithm finds applications in progressive web browsing, visual databases and especially in image transmission over heterogeneous networks.

7.
Region-of-interest image coding based on EBCOT
In this paper, an efficient region-of-interest (ROI) coding scheme, achieved by modifying the implicit ROI encoding method, is proposed. This new method reduces the priority of background coefficients in the ROI code-block without increasing algorithm complexity. It is suitable for applications in which it may be desirable to encode the ROI to a higher quality level than the background. In addition, several EBCOT-based ROI coding schemes for image compression are discussed. Experimental results demonstrate that the proposed ROI coding scheme improves the compression efficiency and combines the advantages of the implicit ROI encoding method (low complexity) and the maxshift method (good ROI rate-distortion performance).
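For context, the maxshift method referenced above scales ROI coefficients so that every ROI bit-plane is coded before any background bit-plane, and the decoder needs no ROI mask. A minimal sketch on a flat coefficient list (illustrative only; the proposed scheme instead reprioritizes background coefficients inside ROI code-blocks, which is not reproduced here):

```python
def maxshift_encode(coeffs, roi_mask):
    """Maxshift ROI scaling: shift ROI coefficients up by s bits, where
    2**s exceeds every background magnitude, so all ROI bit-planes come
    first in an embedded bit-plane coder."""
    bg_max = max((abs(c) for c, in_roi in zip(coeffs, roi_mask) if not in_roi),
                 default=0)
    s = bg_max.bit_length()                    # smallest s with 2**s > bg_max
    shifted = [c << s if in_roi else c for c, in_roi in zip(coeffs, roi_mask)]
    return shifted, s

def maxshift_decode(shifted, s):
    """No ROI mask needed: any magnitude >= 2**s must be an ROI coefficient."""
    return [c >> s if abs(c) >= (1 << s) else c for c in shifted]
```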

8.
An image coding algorithm, Progressive Resolution Coding (PROGRES), for high-speed resolution-scalable decoding is proposed. The algorithm is designed based on a prediction of the decaying dynamic ranges of wavelet subbands. Most interestingly, because of the syntactic relationship between the two coders, the proposed method uses a number of bits very similar to that used by uncoded (i.e., not entropy coded) SPIHT. The algorithm bypasses the bit-plane coding and complicated list processing of SPIHT in order to obtain a considerable speed improvement, giving up quality scalability but without compromising coding efficiency. Since each tree of coefficients is coded separately, with the root of the tree corresponding to a coefficient in the LL subband, the algorithm is easily extensible to random-access decoding. The algorithm is designed and implemented for both 2-D and 3-D wavelet subbands. Experiments show that the decoding speeds of the proposed coding model are four times and nine times faster than uncoded 2-D SPIHT and 3-D SPIHT, respectively, with almost the same decoded quality. The higher decoding speed gain on larger image sources validates the suitability of the proposed method for very large scale image encoding and decoding. In the Appendix, we explain the syntactic relationship of the proposed PROGRES method to uncoded SPIHT and demonstrate that, in the lossless case, the bits sent to the codestream by each algorithm are identical, except that they are sent in a different order.

9.
We propose a new framework for highly scalable video compression, using a lifting-based invertible motion adaptive transform (LIMAT). We use motion-compensated lifting steps to implement the temporal wavelet transform, which preserves invertibility, regardless of the motion model. By contrast, the invertibility requirement has restricted previous approaches to either block-based or global motion compensation. We show that the proposed framework effectively applies the temporal wavelet transform along a set of motion trajectories. An implementation demonstrates high coding gain from a finely embedded, scalable compressed bit-stream. Results also demonstrate the effectiveness of temporal wavelet kernels other than the simple Haar, and the benefits of complex motion modeling, using a deformable triangular mesh. These advances are either incompatible or difficult to achieve with previously proposed strategies for scalable video compression. Video sequences reconstructed at reduced frame-rates, from subsets of the compressed bit-stream, demonstrate the visually pleasing properties expected from low-pass filtering along the motion trajectories. The paper also describes a compact representation for the motion parameters, having motion overhead comparable to that of motion-compensated predictive coders. Our experimental results compare favorably to others reported in the literature; however, our principal objective is to motivate a new framework for highly scalable video compression.
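The central point, that lifting preserves invertibility for any motion operator, can be sketched in one dimension. This is illustrative only: a cyclic shift stands in for a real motion-compensated warp, and the temporal kernel is the simple Haar pair; the key property is that each lifting step is undone by subtracting exactly what was added, however crude the warp:

```python
def roll(seq, s):
    """Cyclic shift of a list, standing in for a motion-compensation warp."""
    s %= len(seq)
    return seq[-s:] + seq[:-s]

def mc_haar_lift(a, b, mv):
    """Motion-compensated temporal Haar lifting on two 1-D 'frames':
    predict step  h = b - warp(a), update step  l = a + warp_back(h)/2.
    Invertible for ANY motion operator, because the inverse subtracts
    the identical predictions."""
    h = [bi - ai for bi, ai in zip(b, roll(a, mv))]
    l = [ai + hi / 2 for ai, hi in zip(a, roll(h, -mv))]
    return l, h

def mc_haar_unlift(l, h, mv):
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    a = [li - hi / 2 for li, hi in zip(l, roll(h, -mv))]
    b = [hi + ai for hi, ai in zip(h, roll(a, mv))]
    return a, b
```

When the motion vector matches the true shift, the high-pass frame h is zero, which is where the coding gain comes from; when it does not, the transform is still perfectly invertible.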

10.
We introduce a highly scalable video compression system for very low bit-rate videoconferencing and telephony applications around 10-30 kbits/s. The video codec first performs a motion-compensated three-dimensional (3-D) wavelet (packet) decomposition of a group of video frames, and then encodes the important wavelet coefficients using a new data structure called tri-zerotrees (TRI-ZTR). Together, the proposed video coding framework forms an extension of the original zerotree idea of Shapiro (1992) for still image compression. In addition, we also incorporate a high degree of video scalability into the codec by combining the layered/progressive coding strategy with the concept of embedded resolution block coding. With scalable algorithms, only one original compressed video bit stream is generated. Different subsets of the bit stream can then be selected at the decoder to support a multitude of display specifications such as bit rate, quality level, spatial resolution, frame rate, decoding hardware complexity, and end-to-end coding delay. The proposed video codec also allows precise bit rate control at both the encoder and decoder, and this can be achieved independently of the other video scaling parameters. Such a scheme is very useful for both constant and variable bit rate transmission over mobile communication channels, as well as video distribution over heterogeneous multicast networks. Finally, our simulations demonstrate comparable objective and subjective performance when compared to the ITU-T H.263 video coding standard, while providing both multirate and multiresolution video scalability.

11.
We investigate the implications of the conventional "t+2-D" motion-compensated (MC) three-dimensional (3-D) discrete wavelet/subband transform structure for spatial scalability and propose a novel flexible structure for fully scalable video compression. In this structure, any number of levels of "pretemporal" spatial wavelet decomposition are performed on the original full resolution frames, followed by MC temporal decomposition of the subbands within each spatial resolution level. Further levels of "posttemporal" spatial decomposition may be performed on the spatiotemporal subbands to provide additional levels of spatial scalability and energy compaction. This structure allows us to trade energy compaction against the potential for artifacts at reduced spatial resolutions. More importantly, the structure permits extensive study of the interaction between spatial aliasing, scalability and energy compaction. We show that where the motion model fails, the "t+2-D" structure inevitably produces misaligned spatial aliasing artifacts in reduced resolution sequences. These artifacts can be removed by using pretemporal spatial decomposition. On the other hand, we also show that the "t+2-D" structure necessarily maximizes compression efficiency. We propose different schemes to minimize the loss of compression efficiency associated with pretemporal spatial decomposition.

12.
Many alternative transforms have been developed recently for improved compression of images, intra prediction residuals or motion-compensated prediction residuals. In this paper, we propose alternative transforms for multiview video coding. We analyze the spatial characteristics of disparity-compensated prediction residuals, and the analysis results show that many regions have 1-D signal characteristics, similar to previous findings for motion-compensated prediction residuals. Signals with such characteristics can be transformed more efficiently with transforms adapted to them, and we propose to use 1-D transforms in the compression of disparity-compensated prediction residuals in multiview video coding. To show the compression gains achievable from using these transforms, we modify the reference software (JMVC) of the multiview video coding amendment to H.264/AVC so that each residual block can be transformed either with a 1-D transform or with the conventional 2-D Discrete Cosine Transform. Experimental results show that coding gains of about 1% to 15% in Bjontegaard-Delta bitrate savings can be achieved.

13.
A scalable video coder cannot be equally efficient over a wide range of bit rates unless both the video data and the motion information are scalable. We propose a wavelet-based, highly scalable video compression scheme with rate-scalable motion coding. The proposed method involves the construction of quality layers for the coded sample data and a separate set of quality layers for the coded motion parameters. When the motion layers are truncated, the decoder receives a quantized version of the motion parameters used to code the sample data. The effect of motion parameter quantization on the reconstructed video distortion is described by a linear model. The optimal tradeoff between the motion and subband bit rates is determined after compression. We propose two methods to determine the optimal tradeoff, one of which explicitly utilizes the linear model. This method performs comparably to a brute force search method, reinforcing the validity of the linear model itself. Experimental results indicate that the cost of scalability is small. In addition, considerable performance improvements are observed at low bit rates, relative to lossless coding of the motion information.

14.
New generations of video compression algorithms, such as those included in the High Efficiency Video Coding (HEVC) standard under development, provide substantially higher compression compared to their predecessors. The gain is achieved by improved prediction of pixels, both within a frame and between frames. Novel coding tools that contribute to the gain produce highly uncorrelated prediction residuals, for which classical frequency decomposition methods, such as the discrete cosine transform, may not be able to supply a compact representation with few significant coefficients. To further increase the compression gains, this paper proposes transform skip modes which allow skipping one or both 1-D constituent transforms (i.e., vertical and horizontal), which is more suitable for sparse residuals. The proposed transform skip mode is tested in the HEVC codec and is able to provide bitrate reductions of up to 10% at the same objective quality when compared with the application of 2-D block transforms only. Moreover, the proposed transform skip mode outperforms the full transform skip currently investigated for possible adoption in the HEVC standard.
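A sketch of the idea (illustrative only; HEVC uses integer transforms, approximated here by a floating-point orthonormal DCT-II, and all names are hypothetical): a 2-D block transform is separable into row and column passes, so either pass can be bypassed independently for residuals whose energy lies along one direction.

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a 1-D sequence."""
    N = len(x)
    out = []
    for k in range(N):
        c = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(c * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                           for n in range(N)))
    return out

def transform_block(block, skip_vertical=False, skip_horizontal=False):
    """Separable block transform with skip modes: either 1-D constituent
    transform (rows or columns) can be bypassed. Skipping both is the
    full transform skip that the paper compares against."""
    rows = block if skip_horizontal else [dct_1d(r) for r in block]
    if skip_vertical:
        return rows
    cols = [dct_1d(list(c)) for c in zip(*rows)]   # transform each column
    return [list(r) for r in zip(*cols)]           # transpose back to rows
```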

15.
This paper proposes a novel light field image compression approach with viewpoint scalability and random access functionalities. Although current state-of-the-art image coding algorithms for light fields already achieve high compression ratios, there is a lack of support for such functionalities, which are important for ensuring compatibility with different displays/capturing devices, enhanced user interaction, and low decoding delay. The proposed solution enables various encoding profiles with different flexible viewpoint scalability and random access capabilities, depending on the application scenario. When compared to other state-of-the-art methods, the proposed approach consistently presents higher bitrate savings (44% on average), namely when compared to a pseudo-video sequence coding approach based on HEVC. Moreover, the proposed scalable codec also outperforms the MuLE and WaSP verification models, achieving average bitrate savings of 37% and 47%, respectively. The various flexible encoding profiles proposed add fine control over the image prediction dependencies, which allows exploiting the tradeoff between coding efficiency and viewpoint random access, consequently decreasing the maximum random access penalties, which range from 0.60 to 0.15 for lenslet and HDCA light fields.

16.
In the Embedded Zerotree Wavelet (EZW) algorithm, a large number of bits are consumed in the encoding of Isolated Zero (IZ) symbols. This is the main bottleneck of the EZW algorithm, which limits its performance in terms of compression gain. To circumvent this limitation, we propose in this paper the Enhanced-EZW (E-EZW) algorithm based on the novel concept of a sparse tree (ST) encoding scheme. The ST encoding scheme provides an efficient encoding of IZ symbols and thereby gives a significant improvement in compression gain. Image features are clustered at various locations in an image, which gives rise to spatial correlation between Significant Coefficients (SCs) at these locations. Based on this observation, we further propose differential coding of the relative position of SCs in the ST (DCORPS) in the E-EZW (DCORPS E-EZW) algorithm. We analyze cases where ST coding gives higher coding gain than the EZW algorithm. Further, we show that DCORPS in sparse tree coding improves the overall coding efficiency of the E-EZW algorithm. Simulation results also demonstrate that the E-EZW and DCORPS E-EZW algorithms outperform two other important wavelet-based compression algorithms, namely set partitioning in hierarchical trees (SPIHT) and JPEG-2000, on a representative set of real-life images.
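To make the IZ bottleneck concrete, here is a sketch of EZW's dominant-pass classification on a coefficient tree (illustrative only; the paper's sparse-tree scheme changes how the IZ case is subsequently encoded, which is not reproduced here). A tree is a `(value, [child_trees])` pair:

```python
def subtree_insignificant(tree, threshold):
    """True if the coefficient and ALL of its descendants are below
    the current significance threshold."""
    value, children = tree
    return abs(value) < threshold and all(
        subtree_insignificant(c, threshold) for c in children)

def ezw_symbol(tree, threshold):
    """EZW dominant-pass symbol for the root of a tree:
    POS/NEG  - root is significant;
    ZTR      - zerotree root: root and every descendant insignificant,
               so one symbol stands in for the whole subtree;
    IZ       - isolated zero: root insignificant but some descendant
               significant, the expensive case the paper targets."""
    value, children = tree
    if abs(value) >= threshold:
        return "POS" if value > 0 else "NEG"
    if all(subtree_insignificant(c, threshold) for c in children):
        return "ZTR"
    return "IZ"
```

Unlike a ZTR, an IZ symbol saves nothing below the root: every child must still be classified, which is why images with scattered significant coefficients generate many costly IZ symbols.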

17.
A lossless image coding algorithm based on the binary wavelet transform
This paper proposes an embedded lossless image coding algorithm based on the binary wavelet transform: the progressive partitioning binary wavelet tree coder (PPBWC). The PPBWC uses a hybrid coefficient scanning method to sort wavelet coefficients by magnitude, producing an intermediate symbol sequence, and applies non-causal adaptive context conditioning that accounts for the self-similarity of wavelet coefficients across subbands and exploits future information about the coefficients to be coded, improving compression performance. Hybrid coefficient scanning and non-causal adaptive context conditioning are the main factors behind the PPBWC's coding efficiency. Experimental results show that, compared with other embedded lossless algorithms, the PPBWC achieves the best lossless coding performance.

18.
Traditional video coders use the previous frame to perform motion estimation and compensation. Though they are less complex and have minimal coding delays, these coders lose their efficiency when subjected to scalability requirements. Recent 3-D wavelet coders using lifting schemes offer high compression efficiency and scalability without significant loss in performance. The main drawback of 3-D coders is that they process several frames at a time, which introduces additional delay and makes them less suitable for real-time applications. In this work, we propose a novel scheme to minimize drift in scalable wavelet-based video coding, which gives a balanced performance between compression efficiency and reconstructed quality with little drift. Our drift control mechanism maintains two frame buffers in the encoder and decoder: one based on the base layer and one based on the base plus enhancement layers. Drift control is achieved by switching between these two buffers for motion estimation and compensation. Our prediction is initially based on the base-plus-enhancement-layers buffer, which inherently introduces drift into the system if part of the enhancement layer is not available at the receiver. A measure of drift is computed based on the channel information, and a threshold is set. When the measure exceeds the threshold, i.e., when drift becomes significant, we switch the prediction to be based on the base-layer buffer, which is always available to the receiver. We also developed an adaptive scheme, with additional computational overhead at the encoder, to decide the switching instant. The performance of the threshold case, which needs fewer computations, is comparable with that of the adaptive scheme. Our coder offers high compression efficiency and sustained video quality for variable-bit-rate wireless channels. This shows that we need not completely eliminate drift, and thereby sacrifice compression efficiency, to obtain better received video quality.

19.
In this paper, we propose an adaptive multiview video coding scheme based on spatiotemporal correlation analyses using hierarchical B pictures (AMVC-HBP) for integrative encoding performance, including high compression efficiency, low complexity, fast random access, and view scalability, by integrating multiple prediction structures. We also propose an in-coding mode-switching algorithm that enables AMVC-HBP to adaptively select a better prediction structure during encoding without any additional complexity. Experimental results show that AMVC-HBP outperforms the previous multiview video coding scheme based on H.264/MPEG-4 AVC using hierarchical B pictures (MVC-HBP) in complexity by 21.5%, in random access speed by about 20%, and in view scalability by 11% to 15% on average. In addition, a distinct coding gain can be achieved by AMVC-HBP for dense and fast-moving sequences compared with MVC-HBP.

20.
Optimal hierarchical coding is sought, for progressive or scalable multidimensional signal transmission, by minimizing the variance of the error difference between the original image and its lower-resolution renditions. The pyramidal coders that are optimal according to the above criterion are determined for images quantized using optimal vector Lloyd-Max quantizers. A rigorous general statistical model of a vector Lloyd-Max quantizer is used, consisting of a linear time-invariant filter followed by additive noise uncorrelated with the input. Given arbitrary analysis filters, the optimal synthesis filters are found. The optimal analysis filters are subsequently determined, leading to formulas for globally optimal structures for pyramidal multidimensional signal decompositions. These structures produce replicas of the original image which, at lower resolutions, retain as much similarity to the original as possible. This is highly useful for the progressive coding of two- or three-dimensional (2-D or 3-D) images needed in applications such as fast browsing through image databases. Furthermore, the minimization of the variance of the error image leads to minimization of the variance of the quantization noise for this image and, hence, to its optimally efficient compression. Experimental results illustrate the implementation and performance of the optimal pyramids as applied to the coding of still 2-D images.
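For reference, a Lloyd-Max quantizer alternates nearest-level partitioning with centroid updates until it converges to an MSE-optimal codebook. A minimal scalar sketch on empirical samples (illustrative only; the paper works with vector quantizers and a statistical filter-plus-noise model of them, not this design loop):

```python
def lloyd_max(samples, levels, iters=50):
    """Scalar Lloyd-Max design: repeat (1) assign each sample to its
    nearest reconstruction level, (2) move each level to the centroid
    of its cell. Each step can only decrease the mean squared error."""
    lo, hi = min(samples), max(samples)
    codebook = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        cells = [[] for _ in range(levels)]
        for s in samples:
            j = min(range(levels), key=lambda i: (s - codebook[i]) ** 2)
            cells[j].append(s)
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

def quantize(s, codebook):
    """Map a sample to its nearest reconstruction level."""
    return min(codebook, key=lambda r: (s - r) ** 2)
```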
