Found 20 similar documents; search took 15 ms
1.
Wavelet coding of volumetric medical datasets  Cited by: 1 (self-citations: 0, citations by others: 1)
Schelkens P, Munteanu A, Barbarien J, Galca M, Giro-Nieto X, Cornelis J 《IEEE transactions on medical imaging》2003,22(3):441-458
Several techniques based on the three-dimensional (3-D) discrete cosine transform (DCT) have been proposed for volumetric data coding. These techniques fail to provide lossless coding coupled with quality and resolution scalability, which is a significant drawback for medical applications. This paper gives an overview of several state-of-the-art 3-D wavelet coders that do meet these requirements and proposes new compression methods exploiting the quadtree and block-based coding concepts, layered zero-coding principles, and context-based arithmetic coding. Additionally, a new 3-D DCT-based coding scheme is designed and used for benchmarking. The proposed wavelet-based coding algorithms produce embedded data streams that can be decoded up to the lossless level and support the desired set of functionality constraints. Moreover, objective and subjective quality evaluation on various medical volumetric datasets shows that the proposed algorithms provide competitive lossy and lossless compression results when compared with the state-of-the-art.
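The lossless decoding these wavelet coders support rests on integer wavelet transforms. A minimal sketch of a one-level integer Haar transform (the classical S-transform) via lifting, with function names of our own choosing:

```python
def haar_forward(x):
    """One-level integer Haar (S-transform) via lifting; x has even length.
    Integer arithmetic makes the transform exactly invertible, the property
    that lets an embedded stream be decoded up to the lossless level."""
    s = [(x[2 * i] + x[2 * i + 1]) // 2 for i in range(len(x) // 2)]  # approximation
    d = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]         # detail
    return s, d

def haar_inverse(s, d):
    """Exact inverse of haar_forward."""
    x = []
    for si, di in zip(s, d):
        a = si + (di + 1) // 2   # recover the even sample from mean and difference
        x.extend([a, a - di])    # odd sample follows from the difference
    return x
```

A 3-D transform applies this 1-D step along each axis of the volume in turn.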
2.
G. Bologna, G. Calvagno, G. A. Mian, R. Rinaldo 《Signal Processing: Image Communication》2000,15(10):891
In this work, we present a coding scheme based on a rate-distortion optimum wavelet packets decomposition and on an adaptive coding procedure that exploits spatial non-stationarity within each subband. We show, by means of a generalization of the concept of coding gain to the case of non-stationary signals, that it may be convenient to perform subband decomposition optimization in conjunction with intraband optimal bit allocation. In our implementation, each subband is partitioned into blocks of coefficients that are coded using a geometric vector quantizer with a rate determined on the basis of spatially local statistical characteristics. The proposed scheme appears to be simpler than other wavelet packets-based schemes presented in the literature and achieves good results in terms of both compression and visual quality.
3.
Kyeong Ho Yang, Seung Jun Lee, Choong Woong Lee 《Signal Processing: Image Communication》1995,6(6):561-577
This paper proposes a new wavelet transform video coder which employs motion compensation, wavelet decomposition, and entropy-constrained vector quantization (ECVQ), in sequence. Each of the layered subimages obtained from wavelet decomposition is segmented into basic blocks, and the blocks are then selectively encoded by ECVQ according to the energy of the samples. We introduce an efficient method to encode the map representing which blocks are encoded, based on inter-band prediction followed by quadtree encoding. The proposed coder uses a simple forward analyzer to optimize the encoding parameters and introduces a preprocessing step that normalizes the input vectors of ECVQ in order to reduce the image dependency of ECVQ codebooks. Simulation results show that our video coder provides good PSNR (peak signal-to-noise ratio) performance and efficient rate control.
4.
Michal Holtzman-Gazit, Ron Kimmel, Nathan Peled, Dorith Goldsher 《IEEE transactions on image processing》2006,15(2):354-363
We present a new segmentation method for extracting thin structures embedded in three-dimensional medical images based on modern variational principles. We demonstrate the importance of the edge alignment and homogeneity terms in the segmentation of blood vessels and vascular trees. For that goal, the Chan-Vese minimal variance method is combined with the boundary alignment and geodesic active surface models. An efficient numerical scheme is proposed. In order to simultaneously detect a number of different objects in the image, a hierarchical approach is applied.
5.
A. Ouled Zaid, A. Makhloufi, A. Bouallegue, C. Olivier 《Signal, Image and Video Processing》2010,4(1):11-21
Digital watermarking can be used as a data-hiding technique to interleave medical images with patient information before transmission and storage. While digital image watermarking and lossy compression methods have been widely studied, much less attention has been paid to their application in medical imaging, due partly to speculation that degradation of image information reduces viewer performance. This article describes a hybrid data-hiding/compression system adapted to medical imaging. The central contribution is the integration of blind watermarking, based on turbo trellis-coded quantization, into the JP3D encoder. The latter remains conformant with its JPEG 2000 predecessors, so watermark embedding can be applied to two-dimensional as well as volumetric images. Results of our method applied to magnetic resonance and computed tomography medical images show that the watermarking scheme is robust to JP3D compression attacks and can provide a relatively high data-embedding rate while keeping distortion relatively low.
6.
A combined-transform coding (CTC) scheme is proposed to reduce the blocking artifact of conventional block transform coding and hence to improve the subjective performance. The proposed CTC scheme is described and its information-theoretic properties are investigated. Computer simulation results for a class of chest X-ray images are presented. A comparison between the CTC scheme and the conventional discrete cosine transform (DCT) and discrete Walsh-Hadamard transform (DWHT) demonstrates the performance improvement of the proposed scheme. In addition, combined coding can also be used in noiseless coding, yielding a slight improvement in the compression performance if it is used properly.
7.
8.
The enormous data volume of volumetric medical images (VMI) poses a transmission and storage problem that can be solved with compression. For lossy compression of a very long VMI sequence, automatically preserving the diagnostic features in reconstructed images is essential. The proposed wavelet-based adaptive vector quantizer incorporates a distortion-constrained codebook replenishment (DCCR) mechanism to meet a user-defined quality demand in peak signal-to-noise ratio. Combining a codebook updating strategy with the well-known set partitioning in hierarchical trees (SPIHT) technique, the DCCR mechanism provides an excellent coding gain. Experimental results show that the proposed approach is superior to pure SPIHT and to the JPEG2000 algorithm in terms of coding performance. We also propose an iterative fast-searching algorithm that finds the desired signal quality along an energy-quality curve instead of a traditional rate-distortion curve. The algorithm performs quality control quickly, smoothly, and reliably.
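The vector-quantization step underlying such codebook-based coders can be sketched as a nearest-codeword search (a generic sketch of plain VQ, not the DCCR mechanism itself; function names hypothetical):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each block (row vector) to the index of its nearest codeword
    in squared Euclidean distance."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct blocks by codebook lookup."""
    return codebook[indices]
```

A codebook-replenishment scheme like DCCR additionally updates `codebook` whenever the reconstruction quality drops below the user-defined demand.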
9.
This paper presents a new lossy coding scheme based on a 3D wavelet transform and lattice vector quantization for volumetric medical images. The main contribution of this work is the design of a new codebook enclosing a multidimensional dead zone during the quantization step, which better accounts for correlations between neighboring voxels. Furthermore, we present an efficient rate–distortion model to simplify the bit-allocation procedure for our intra-band scheme. Our algorithm has been evaluated on several CT and MR image volumes. At high compression ratios, we show that it can outperform the best existing methods in terms of rate–distortion trade-off. In addition, our method better preserves details and thus produces reconstructed images that are less blurred than those of the well-known 3D SPIHT algorithm, which stands as a reference.
10.
A region-based high-capacity lossless data-hiding method for medical images  Cited by: 1 (self-citations: 0, citations by others: 1)
Exploiting the region-based characteristics typical of medical images, a high-capacity lossless data-hiding method based on regions and histogram shifting is proposed. The foreground region of the original image is obtained with maximum between-class variance (Otsu) segmentation, and the data-embedding region is then derived by aggregated polygon approximation and image fitting. During embedding, difference-histogram cyclic shifting and coding-based histogram shifting are used to embed data in the foreground and background regions, respectively, which raises the capacity of the original histogram-shifting method and solves its overflow problem. Experimental results show a total embedding capacity above 1 bit/packet with stego-image quality around 40 dB, making the method suitable for high-capacity data hiding in quality-sensitive images with regional characteristics.
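The histogram-shifting primitive the method builds on can be sketched for a plain grayscale image as follows (a textbook peak/zero-bin version, not the region-based difference-histogram variant above; function names hypothetical):

```python
import numpy as np

def embed_histogram_shift(img, bits):
    """Reversible embedding by histogram shifting (grayscale uint8).
    Assumes the least-occupied bin lies to the right of the peak bin."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))                        # most frequent gray level
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))  # emptiest level right of peak
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1              # shift to free the bin at peak+1
    flat = out.ravel()                                 # view into out
    for i, b in zip(np.flatnonzero(flat == peak), bits):
        flat[i] = peak + b                             # peak carries 0, peak+1 carries 1
    return out.astype(np.uint8), peak
```

Extraction reads the bits back from pixels valued `peak` or `peak + 1` and undoes the shift, restoring the original image exactly, which is what makes the scheme lossless.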
11.
We present a new technique for coding gray-scale images for facsimile transmission and printing on a laser printer. We use a gray-scale image encoder so that it is only at the receiver that the image is converted to a binary pattern and printed. The conventional approach is to transmit the image in halftoned form, using entropy coding (e.g. CCITT Group 3 or JBIG). The main advantages of the new approach are that we can get higher compression rates and that the receiver can tune the halftoning process to the particular printer. We use a perceptually based subband coding approach. It uses a perceptual masking model that was empirically derived for printed images using a specific printer and halftoning technique. In particular, we used a 300 dots/inch write-black laser printer and a standard halftoning scheme ("classical") for that resolution. For nearly transparent coding of gray-scale images, the proposed technique requires lower rates than the standard facsimile techniques.
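Receiver-side halftoning of the kind described can be sketched with ordered dithering against a Bayer threshold matrix (a generic stand-in for the "classical" screen, whose details the abstract does not specify):

```python
import numpy as np

# 4x4 Bayer index matrix; scaled below to thresholds spanning 0..255
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])

def halftone(gray):
    """Ordered dithering: tile the threshold matrix over the image and
    binarize each pixel against its local threshold (1 = set dot)."""
    h, w = gray.shape
    thresh = (BAYER4[np.arange(h)[:, None] % 4, np.arange(w)[None, :] % 4] + 0.5) * 16
    return (gray >= thresh).astype(np.uint8)
```

Because this step runs at the receiver, it can be swapped for any screen matched to the target printer, which is the tuning advantage the paper claims.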
12.
The authors present a new technique for coding gray-scale images for facsimile transmission and printing on a laser printer. They use a gray-scale image encoder so that it is only at the receiver that the image is converted to a binary pattern and printed. The conventional approach is to transmit the image in halftoned form, using entropy coding (e.g., CCITT Group 3 or JBIG). The main advantages of the new approach are that one can get higher compression rates and that the receiver can tune the halftoning process to the particular printer. They use a perceptually based subband coding approach. It uses a perceptual masking model that was empirically derived for printed images using a specific printer and halftoning technique. In particular, they used a 300 dots/inch write-black laser printer and a standard halftoning scheme ("classical") for that resolution. For nearly transparent coding of gray-scale images, the proposed technique requires lower rates than the standard facsimile techniques.
13.
Permuted smoothed descriptions and refinement coding for images  Cited by: 1 (self-citations: 0, citations by others: 1)
Ridge J., Ware F.W., Gibson J.D. 《Selected Areas in Communications, IEEE Journal on》2000,18(6):915-926
14.
If input-queued switches can be designed to maintain two or more separate queues per input line, with each queue associated with a subset of the output addresses, throughputs exceeding the well-known limit of 58.6% due to head-of-line (HOL) blocking effects can be obtained. The switch complexity is only O(N), not O(N^2) as in some recent proposals. The author proposes three switching rules and presents simulation results which indicate that all three outperform a familiar theoretical baseline model.
15.
Diversity-and-multiplexing tradeoff and throughput of superposition coding relaying strategy  Cited by: 1 (self-citations: 1, citations by others: 0)
While network-coding cooperative relaying (NC-relaying) has the merit of high spectral efficiency, superposition-coding relaying (SC-relaying) has the merit of high throughput. In this paper, a novel concept, coded cooperative relaying, is presented as a unified scheme covering NC-relaying and SC-relaying. For the SC-relaying strategy, which can be considered a one-way coded relaying scheme with a multi-access channel, closed-form solutions for the outage probabilities of the basic signal and the additional signal are obtained first. Second, the diversity-and-multiplexing tradeoff (DMT) characteristics of the basic and additional signals are investigated in full, along with the optimal closed-form solutions. Comparative numerical analysis shows that the evaluation error of the throughput based on the closed-form solution is about 0.15 nats, which is within the acceptable error range. Due to the mutual effect between the two source signals, the achievable maximal values of the two multiplexing gains are less than 1.
16.
This paper studies the maximal throughput of multi-rate multicast with network coding. Unlike previous work, which focuses on network coding for single-rate multicast, this paper takes link heterogeneity into account and adopts multi-rate multicast to address it. The maximal achievable throughput problem for multi-rate multicast is first formally described, and it is proved that, under the assumptions of independent layers and fixed layer rates, the maximal-throughput problem for multi-rate multicast with network coding is NP-hard; an upper bound on the maximal throughput is also given. The maximal-throughput problem with dependent layers and variable layer rates is studied as well.
17.
Zhang Mu, Zhang Shunyi 《Journal of Electronics (China)》2006,23(4):584-589
This paper investigates the maximal achievable multi-rate throughput of a multicast session in the presence of network coding. Deviating from previous works, which focus on single-rate network coding, our work takes the heterogeneity of sinks into account and provides multiple data layers to address the problem. We first formulate the maximal achievable throughput problem under the assumption that the data layers are independent and the layer rates are static. It is proved that the problem in this case is, unfortunately, Non-deterministic Polynomial-time (NP)-hard. In addition, our formulation is extended to problems with dependent layers and dynamic layers. Furthermore, an approximation algorithm that satisfies certain fairness is proposed.
18.
Su C.-K., Hsin H.-C., Lin S.-F. 《Vision, Image and Signal Processing, IEE Proceedings -》2005,152(6):752-756
A hybrid coding system that uses a combination of set partitioning in hierarchical trees (SPIHT) and vector quantisation (VQ) for image compression is presented. Here, the wavelet coefficients of the input image are rearranged to form wavelet trees composed of the corresponding wavelet coefficients from all the subbands of the same orientation. A simple tree classifier is proposed to group wavelet trees into two classes based on the amplitude distribution. Each class of wavelet trees is encoded using an appropriate procedure, specifically either SPIHT or VQ. Experimental results show that the advantages obtained by combining the superior coding performance of VQ with the efficient cross-subband prediction of SPIHT are appreciable for the compression task, especially for natural images with large portions of textures. For example, the proposed hybrid coding outperforms SPIHT by 0.38 dB in PSNR at 0.5 bpp for the Bridge image, and by 0.74 dB at 0.5 bpp for the Mandrill image.
19.
According to the circle-packing theorem, the packing efficiency of a hexagonal lattice is higher than that of an equivalent square tessellation. Consequently, in several contexts, hexagonally sampled images preserve information content better than their Cartesian counterparts. In this paper, novel mapping techniques alongside a wavelet compression scheme are presented for hexagonal images. Specifically, we introduce two tree-based coding schemes, referred to as SBHex (spirally-mapped branch-coding for hexagonal images) and BBHex (breadth-first block-coding for hexagonal images). Both coding schemes respect the geometry of the hexagonal lattice and yield better compression results. Our empirical results show that the proposed algorithms for hexagonal images produce better reconstruction quality at low bits-per-pixel representations than their tree-based coding counterparts for the Cartesian grid.
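The packing-efficiency claim is easy to check numerically: the density of the hexagonal circle packing is π/(2√3), against π/4 for the square lattice (a quick verification, not code from the paper):

```python
import math

# fraction of the plane covered by equal circles on each lattice
hex_density = math.pi / (2 * math.sqrt(3))   # hexagonal lattice
sq_density = math.pi / 4                     # square lattice

print(round(hex_density, 4), round(sq_density, 4))   # 0.9069 0.7854
```

The roughly 13% density advantage is what makes hexagonal sampling more efficient at covering the plane with a given number of samples.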
20.
This paper proposes a novel scheme of scalable coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. After decomposing the encrypted data into a downsampled subimage and several data sets with a multiple-resolution construction, an encoder quantizes the subimage and the Hadamard coefficients of each data set to reduce the data amount. Then, the data of quantized subimage and coefficients are regarded as a set of bitstreams. At the receiver side, while a subimage is decrypted to provide the rough information of the original content, the quantized coefficients can be used to reconstruct the detailed content with an iteratively updating procedure. Because of the hierarchical coding mechanism, the principal original content with higher resolution can be reconstructed when more bitstreams are received.
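The encryption phase described above, modulo-256 addition of a key-derived pseudorandom stream, can be sketched as follows (NumPy's seeded generator stands in for the keystream derivation, which the abstract does not specify):

```python
import numpy as np

def keystream(key, shape):
    """Pseudorandom byte stream derived from the secret key (stand-in PRNG)."""
    return np.random.RandomState(key).randint(0, 256, size=shape)

def encrypt(img, key):
    """Mask each pixel with modulo-256 addition of the keystream."""
    return ((img.astype(np.int32) + keystream(key, img.shape)) % 256).astype(np.uint8)

def decrypt(enc, key):
    """Invert the masking by modulo-256 subtraction with the same keystream."""
    return ((enc.astype(np.int32) - keystream(key, enc.shape)) % 256).astype(np.uint8)
```

Because the mask is additive modulo 256, downsampling and Hadamard-domain processing can operate directly on the encrypted data before the receiver ever decrypts it.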