20 similar documents found (search time: 15 ms)
1.
《Engineering Applications of Artificial Intelligence》2005,18(4):383-392
A novel method based on topology-preserving neural networks is used to implement vector quantization for medical image compression. The described method is an innovative image compression procedure that differs from known systems in several ways: it can be applied to larger image blocks and it uses better probability-distribution estimation methods. A transformation-based operation is applied as part of the encoder to the block-decomposed image. The quantization process is performed by a “neural-gas” network which, applied to vector quantization, converges quickly to low distortion errors and reaches a distortion error lower than that of Kohonen's feature map or the LBG algorithm. To study the efficiency of our algorithm, we blended mathematical phantom features into clinically proven cancer-free mammograms. The influence of the neural compression method on the phantom features and the mammographic image is not visually perceptible up to a high compression rate.
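The abstract describes the quantizer only at the level of the “neural-gas” update rule. As a rough illustration only (not the authors' implementation; hyperparameters and annealing schedules below are arbitrary choices), a minimal neural-gas vector quantizer can be sketched as:

```python
import numpy as np

def neural_gas_vq(data, n_codewords=8, epochs=20, eps0=0.5, lam0=None, seed=0):
    """Train a neural-gas codebook for vector quantization (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), n_codewords, replace=False)].astype(float)
    lam0 = lam0 or n_codewords / 2.0
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            # Anneal the learning rate and neighbourhood range over time.
            frac = t / n_steps
            eps = eps0 * (0.01 / eps0) ** frac
            lam = lam0 * (0.1 / lam0) ** frac
            # Rank all codewords by distance to the input (the "gas" ordering).
            dists = np.linalg.norm(codebook - x, axis=1)
            ranks = np.argsort(np.argsort(dists))
            # Every codeword moves toward x, weighted by its rank.
            codebook += (eps * np.exp(-ranks / lam))[:, None] * (x - codebook)
            t += 1
    return codebook

def quantize(data, codebook):
    """Map each vector to the index of its nearest codeword."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

Unlike Kohonen's map, the update depends on distance rank rather than grid neighbourhood, which is what gives neural gas its fast convergence to low distortion.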
2.
A two-level, three-layer structured network is developed to estimate moving-average model parameters by matching second-order and third-order cumulants. The structured network is a multilayer feedforward network composed of linear summers whose weights have a clear physical meaning. The first level consists of random-access memory units that control the connectivities of the second-level summers. The second level consists of three layers of linear summers in which the weight of any summer represents a moving-average parameter to be estimated. The connectivities among these summers are controlled by the first-level memory units in such a way that the outputs of the second-level structured network equal the desired second-order or third-order statistics when the summer weights equal their true moving-average parameter values. Each second-order and third-order cumulant is viewed as a pattern that the structured network needs to learn, and a steepest-descent algorithm is proposed for training it. Extensions to particular kinds of estimation and simulation results are also presented.
3.
Mohsen Nasri, Abdelhamid Helali, Halim Sghaier, Hassen Maaref 《Computers & Electrical Engineering》2011,37(5):798-810
When using wireless sensor networks for real-time image transmission, several critical constraints must be considered: limited computational power, limited storage, narrow bandwidth and the energy required. Efficient compression and transmission of images in wireless sensor networks is therefore considered. To address these concerns, an efficient adaptive compression scheme is proposed that ensures a significant reduction in computation and energy as well as communication with minimal degradation of image quality. The scheme is based on the wavelet image transform and on distributed image compression, sharing processing tasks among nodes to extend the overall lifetime of the network. Simulation results show that the proposed scheme optimizes the network lifetime, significantly reduces the amount of required memory, and minimizes computation energy by reducing the number of arithmetic operations and memory accesses.
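The scheme's building block is a wavelet decomposition whose subbands can be computed and forwarded by different nodes. The abstract does not name the wavelet; as a minimal sketch under that caveat, one level of the simple Haar transform looks like:

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform, returning LL, LH, HL, HH subbands."""
    img = img.astype(float)
    # Rows: average / difference of adjacent pixel pairs.
    lo = (img[:, ::2] + img[:, 1::2]) / 2.0
    hi = (img[:, ::2] - img[:, 1::2]) / 2.0
    # Columns: repeat the average / difference on each half.
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    lo[::2], lo[1::2] = ll + lh, ll - lh
    hi[::2], hi[1::2] = hl + hh, hl - hh
    img = np.empty((2 * h, 2 * w))
    img[:, ::2], img[:, 1::2] = lo + hi, lo - hi
    return img
```

Only adds, subtracts and shifts are involved, which is consistent with the low-computation, low-memory goals stated in the abstract.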
4.
Lena Chang 《Pattern recognition》2004,37(6):1233-1243
In this study, a novel segmentation technique is proposed for multispectral satellite image compression. A segmentation decision rule composed of the principal eigenvectors of the image correlation matrix is derived to determine the similarity of image characteristics between two image blocks. Based on this decision rule, we develop an eigenregion-based segmentation technique that divides the original image into proper eigenregions according to their local terrain characteristics. To achieve better compression efficiency, each eigenregion image is then compressed by an efficient compression algorithm, the eigenregion-based eigensubspace transform (ER-EST). The ER-EST combines a 1D eigensubspace transform (EST) with the 2D DCT to decorrelate the data in the spectral and spatial domains. Before the EST is performed, the dimension of its transformation matrix is estimated by an information criterion, so that each eigenregion image may be approximated by lower-dimensional components in the eigensubspace. Simulation tests performed on SPOT and Landsat TM images demonstrate that the proposed compression scheme is suitable for multispectral satellite images.
5.
Reza Jafari Djemel Ziou Mohammad Mehdi Rashidi 《Expert systems with applications》2013,40(17):6918-6927
The goal of image compression is to remove redundancies in order to minimize the number of bits required to represent an image, while steganography works by invisibly embedding secret data in those very redundancies. Our focus in this paper is the improvement of image compression through steganography. Even though the purposes of digital steganography and data compression are by definition contradictory, we use these techniques jointly to compress an image, and two schemes exploring this idea are suggested. The first scheme combines a steganographic algorithm with the baseline DCT-based JPEG, while the second uses the same steganographic algorithm with the DWT-based JPEG. In this study, data compression is performed twice: first, we take advantage of the energy compaction of JPEG to reduce redundant data; second, we embed some bit blocks within subsequent blocks of the same image using steganography. Embedding these bits does not increase the size of the compressed image; on the contrary, it decreases the file size further. Experimental results show that this promising technique has wide potential in image coding.
6.
To overcome the severe limitations on storage, processing capability and energy of individual nodes in wireless multimedia sensor networks (WMSNs), a distributed image compression algorithm for WMSNs based on the lapped biorthogonal transform (LBT) is proposed. Simulation results show that the algorithm has low storage and processing requirements and extremely low energy consumption, and that it can multiply the network lifetime when sensor nodes are densely deployed.
7.
The objective of this paper is the application of adaptive constructive one-hidden-layer feedforward neural networks (OHL-FNNs) to image compression. Comparisons with fixed-structure neural networks are performed to demonstrate the training and generalization capabilities of the proposed adaptive constructive networks. The influence of quantization effects and a comparison with the baseline JPEG scheme are also investigated. Several experiments demonstrate that the results are very promising compared with techniques presently available in the literature.
8.
Image authentication is becoming very important for certifying data integrity. A key issue in image authentication is the design of a compact signature that contains sufficient information to detect illegal tampering yet is robust under allowable manipulations. In this paper, we recognize that most permissible operations on images are global distortions such as low-pass filtering and JPEG compression, whereas illegal data manipulations tend to be localized distortions. To exploit this observation, we propose an image authentication scheme in which the signature is the result of an extremely low-bit-rate content-based compression. The content-based compression is guided by a space-variant weighting function whose values are higher in the more important and sensitive regions. This spatially dependent weighting function determines a weighted norm that is particularly sensitive to the localized distortions induced by illegal tampering. It also gives better compactness than the usual compression schemes, which treat every spatial region as equally important. In our implementation, the weighting function is a multifovea weighted function that resembles the biological foveated vision system. The foveae are salient points determined in the scale-space representation of the image. The desirable properties of the multifovea weighted function in the wavelet domain fit nicely into our scheme. We have implemented our technique and tested its robustness and sensitivity under several manipulations.
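The abstract does not give the exact form of the multifovea weighted function. As a purely hypothetical illustration of the idea (Gaussian bumps, the `sigma` parameter, and combining foveae by maximum are all assumptions, not the paper's construction), a space-variant weight map and its weighted norm might look like:

```python
import numpy as np

def multifovea_weights(shape, foveae, sigma=10.0):
    """Space-variant weight map: a Gaussian bump centred on each salient fovea."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    w = np.zeros(shape)
    for (fy, fx) in foveae:
        # Combine foveae with a pointwise maximum so each peak stays at 1.
        w = np.maximum(w, np.exp(-((yy - fy) ** 2 + (xx - fx) ** 2) / (2 * sigma ** 2)))
    return w

def weighted_norm(err, w):
    """Weighted error norm: localized distortions near a fovea score higher."""
    return float(np.sqrt(np.sum(w * err ** 2)))
```

Under such a norm, the same pixel-level tampering is penalized far more when it lands near a salient point than in an unimportant region, which is the sensitivity property the scheme relies on.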
9.
10.
Feng Pan 《Pattern recognition letters》2002,23(14):1837-1845
This paper describes a new adaptive technique for the coding of transform coefficients in block-based image compression schemes. The presence and orientation of edge information in a sub-block are used to select different quantization tables and zigzag scan paths that cater to the local image pattern. Measures of edge presence and edge orientation in a sub-block are calculated from its DCT coefficients, and each sub-block is classified into one of four edge patterns. Experimental results show that, compared with JPEG and an improved HVS-based coder, the new scheme significantly increases the compression ratio without sacrificing reconstructed image quality.
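The paper's exact edge measures are not reproduced in the abstract. As a hedged sketch of the general idea only (the thresholds, ratio test and class names below are invented for illustration), edge orientation can be read off the distribution of a block's DCT coefficients: a vertical edge concentrates energy in the first row of AC coefficients, a horizontal edge in the first column.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def classify_edge(block, flat_thresh=1.0, ratio=4.0):
    """Classify a sub-block by where its DCT AC energy concentrates."""
    C = dct_matrix(block.shape[0])
    coef = C @ block.astype(float) @ C.T       # 2D DCT of the block
    row_e = np.sum(coef[0, 1:] ** 2)           # horizontal-frequency AC energy
    col_e = np.sum(coef[1:, 0] ** 2)           # vertical-frequency AC energy
    diag_e = np.sum(coef[1:, 1:] ** 2)         # mixed-frequency AC energy
    total = row_e + col_e + diag_e             # DC coefficient excluded throughout
    if total < flat_thresh:
        return "flat"
    if row_e > ratio * (col_e + diag_e):
        return "vertical"                      # intensity varies left-to-right
    if col_e > ratio * (row_e + diag_e):
        return "horizontal"                    # intensity varies top-to-bottom
    return "diagonal"
```

A coder along the paper's lines would then pick a quantization table and scan path per class, e.g. scanning column-first for vertical-edge blocks.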
11.
《Image and vision computing》2001,19(9-10):649-668
Principal component analysis (PCA) is a well-known statistical processing technique that makes it possible to study the correlations among the components of multivariate data and to reduce redundancy by projecting the data onto a proper basis. PCA may be performed either in batch or in recursive fashion; the latter has proven very effective for high-dimensional data, as in image compression. The aim of this paper is to present a comparison of principal component neural networks for still image compression and coding. We first recall basic concepts of neural PCA, then review a number of principal component networks from the scientific literature, comparing their structures, learning algorithms and required computational effort, along with a discussion of the advantages and drawbacks of each technique. The conclusion of our comparison among eight principal component networks is that the cascade recursive least-squares algorithm of Cichocki, Kasprzak and Skarbek exhibits the best numerical and structural properties.
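As background on the recursive, sample-by-sample flavour of neural PCA that such surveys compare, here is a minimal sketch of Oja's rule for extracting the first principal component (this is generic textbook material, not one of the eight surveyed networks; the learning rate and epoch count are arbitrary):

```python
import numpy as np

def oja_first_pc(data, lr=0.01, epochs=30, seed=0):
    """Estimate the first principal component recursively with Oja's rule."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=data.shape[1])
    w /= np.linalg.norm(w)
    X = data - data.mean(axis=0)           # PCA assumes centred data
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            y = w @ x                      # neuron output: projection onto w
            w += lr * y * (x - y * w)      # Hebbian term minus self-normalizing decay
        w /= np.linalg.norm(w)             # guard against numerical drift
    return w
```

The appeal for image compression is that `w` is updated one sample at a time, so the full covariance matrix of high-dimensional block data never needs to be formed.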
12.
Oehler K.L. Gray R.M. 《IEEE transactions on pattern analysis and machine intelligence》1995,17(5):461-473
We describe a method of combining classification and compression in a single vector quantizer by incorporating a Bayes risk term into the distortion measure used in the quantizer design algorithm. Once trained, the quantizer can operate to minimize the Bayes-risk-weighted distortion measure if a model provides the required posterior probabilities, or it can operate suboptimally by minimizing the squared error alone. Comparisons are made with other vector-quantizer-based classifiers, including the independent design of quantization and minimum-Bayes-risk classification, and Kohonen's LVQ. A variety of examples demonstrate that the proposed method can provide classification ability close or superior to that of learning VQ while simultaneously providing superior compression performance.
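The encoding rule described, squared error plus a Bayes risk term, can be sketched as follows. This is an illustration under stated assumptions (a 0/1 misclassification cost and all variable names are mine; the paper's codebook design algorithm itself is not reproduced):

```python
import numpy as np

def bayes_vq_encode(x, posterior, codebook, labels, lam=1.0):
    """Choose the codeword minimizing squared error plus a weighted Bayes risk.

    posterior: class posteriors P(class | x); labels: class label per codeword.
    With lam=0 this reduces to plain nearest-neighbour (squared-error) encoding.
    """
    sq_err = np.sum((codebook - x) ** 2, axis=1)
    risk = 1.0 - posterior[labels]   # expected 0/1 misclassification cost per codeword
    return int(np.argmin(sq_err + lam * risk))
```

The weight `lam` trades compression fidelity against classification accuracy: a codeword whose class label disagrees with the posterior is penalized even when it is geometrically closest.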
13.
An adaptive electronic neural network processor has been developed for high-speed image compression based on a frequency-sensitive self-organization algorithm. The performance of this self-organization network is compared with that of a conventional vector quantization algorithm; the proposed method is quite efficient and can achieve near-optimal results. The neural network processor includes a pipelined codebook generator and a parallel vector quantizer, which achieves a time complexity of O(1) for each quantization vector. A mixed-signal design technique, with analog circuitry performing the neural computation and digital circuitry processing the multiple-bit address information, is used. A prototype chip for a 25-D adaptive vector quantizer of 64 code words was designed, fabricated, and tested. It occupies a silicon area of 4.6 mm × 6.8 mm in a 2.0 µm scalable CMOS technology and provides a computing capability as high as 3.2 billion connections/s. Experimental results for the chip and for the winner-take-all circuit test structure are presented.
14.
Triple-correlation-based neural networks are introduced in this paper and used for invariant classification of 2D gray-scale images. Third-order correlations of an image are appropriately clustered, in the spatial or spectral domain, to generate an equivalent image representation that is invariant with respect to translation, rotation, and dilation. An efficient implementation scheme is also proposed that is robust to distortions and insensitive to additive noise, and that classifies the original image using adequate neural network architectures applied directly to the 2D image representations. Third-order neural networks are shown to be a specific category of triple-correlation-based networks applied to either binary or gray-scale images. A simulation study using synthetic and real image data illustrates the theoretical developments.
15.
The binary wavelet transform (BWT) has several distinct advantages over the real wavelet transform (RWT), such as conservation of the alphabet size of the wavelet coefficients, no quantization introduced during the transform, and the simple Boolean operations involved. Thus, fewer coding passes are needed and no sign bits are required when compressing the transformed coefficients. However, the use of the BWT for embedded grayscale image compression is not well established. This paper proposes a novel context-based binary wavelet transform coding approach (CBWTC) that combines the BWT with a high-order context-based arithmetic coding scheme for embedded compression of grayscale images. In our CBWTC algorithm, the BWT is applied to decorrelate the linear correlations among image coefficients without expanding the alphabet size of symbols. To match the CBWTC algorithm, we employ the gray code representation (GCR) to remove statistical dependencies among bi-level bitplane images and develop a combined arithmetic coding scheme in which three highpass BWT coefficients at the same location are combined to form an octave symbol and then encoded with a ternary arithmetic coder. In this way, the compression performance of the CBWTC algorithm is improved: it not only alleviates the degradation of predictability caused by the BWT, but also eliminates the correlation of BWT coefficients within same-level subbands. The conditional context of the CBWTC is properly modeled by exploiting the characteristics of the BWT and by taking advantage of non-causal adaptive context modeling. Experimental results show that the average coding performance of the CBWTC is superior to that of state-of-the-art grayscale image coders, and that it consistently outperforms the JBIG2 algorithm and other BWT-based binary coding techniques on a set of test images with different characteristics and resolutions.
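The gray code representation (GCR) step is easy to make concrete. The sketch below (plain Python, not the paper's coder) shows why GCR decorrelates bitplanes: neighbouring intensities that differ in many natural-binary bits differ in few Gray-code bits.

```python
def gray_code(v):
    """Binary-reflected Gray code of a non-negative integer pixel value."""
    return v ^ (v >> 1)

def inverse_gray(g):
    """Recover the original value from its Gray code."""
    v = 0
    while g:
        v ^= g
        g >>= 1
    return v

def bitplanes(values, bits=8):
    """Split Gray-coded pixel values into bi-level bitplanes (MSB plane first)."""
    coded = [gray_code(v) for v in values]
    return [[(c >> b) & 1 for c in coded] for b in range(bits - 1, -1, -1)]
```

For example, 127 and 128 differ in all eight natural-binary bits but in a single Gray-code bit, so bitplane images change far less abruptly across smooth intensity ramps.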
16.
Sujitha Juliet, Elijah Blessing Rajsingh, Kirubakaran Ezra 《Journal of Real-Time Image Processing》2016,11(2):401-412
Despite great advances in multimedia data storage and communication technologies, the compression of medical data remains challenging. This paper presents a novel method for the compression of medical images. The proposed method uses the Ripplet transform to represent singularities along arbitrarily shaped curves, and the Set Partitioning in Hierarchical Trees (SPIHT) encoder to encode the significant coefficients. Its main objectives are to provide high-quality compressed images by representing images at different scales and directions, and to achieve a high compression ratio. Experimental results on a set of medical images demonstrate that, besides providing multiresolution and high directionality, the proposed method attains a high peak signal-to-noise ratio and a significant compression ratio compared with conventional and state-of-the-art compression methods.
17.
With the rapid development of the Internet, vast amounts of image data are generated. To reduce storage while preserving image quality, a lossy image compression algorithm combining singular value decomposition (SVD) and the Contourlet transform is proposed. The algorithm first applies SVD to the image and, according to the contribution of each singular value to the image signal, retains an appropriate set of singular values to achieve a first stage of compression; the image is then Contourlet-transformed and quantized for a second stage of compression. Experimental comparisons with direct SVD-based compression and with Contourlet-based compression show that the proposed algorithm performs better, achieving a higher peak signal-to-noise ratio and SSIM at the same compression ratio.
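The first (SVD) stage is easy to sketch. The rank-k truncation below is a generic illustration only: the paper's rule for choosing which singular values to retain, and its Contourlet stage, are not reproduced here.

```python
import numpy as np

def svd_compress(img, k):
    """Rank-k approximation of an image: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def psnr(orig, approx, peak=255.0):
    """Peak signal-to-noise ratio, the quality metric used in the comparison."""
    mse = np.mean((orig.astype(float) - approx) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Storing U[:, :k], s[:k] and Vt[:k] needs k(m + n + 1) numbers instead of mn, so the compression ratio is set directly by how many singular values are kept.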
18.
Low-complexity and energy efficient image compression scheme for wireless sensor networks (total citations: 2; self-citations: 0; citations by others: 2)
Currently, most energy-constrained wireless sensor networks are designed with the objective of minimizing communication power at the cost of more computation. To achieve high compression efficiency, the image compression algorithms mainly used in wireless sensor networks are high-complexity, state-of-the-art standards such as JPEG2000. These algorithms require complex hardware and make the energy consumed by computation comparable to the communication energy dissipation. To reduce the hardware cost and the energy consumption of the sensor network, a low-complexity and energy-efficient image compression scheme is proposed. The compression algorithm in the proposed scheme greatly lowers computational complexity and reduces the required memory while still achieving the required PSNR. The proposed implementation overcomes the computation and energy limitations of individual nodes by sharing the processing of tasks, and it applies transmission range adjustment to save communication energy. The performance of the proposed scheme is investigated with respect to image quality and energy consumption; simulation results show that it greatly prolongs the lifetime of the network under a given image quality requirement.
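The claim that transmission range adjustment saves energy can be illustrated with the widely used first-order radio model. Note the constants below are textbook-style values, not parameters taken from this paper:

```python
def tx_energy(bits, dist, e_elec=50e-9, eps_amp=100e-12):
    """Energy (J) to transmit `bits` over distance `dist`; amplifier cost grows as d^2."""
    return e_elec * bits + eps_amp * bits * dist ** 2

def rx_energy(bits, e_elec=50e-9):
    """Energy (J) spent by the electronics to receive `bits`."""
    return e_elec * bits
```

Because the amplifier term grows quadratically with distance, relaying a packet through an intermediate node at half the range can cost less in total (two short transmissions plus one reception) than a single long-range transmission, which is why task sharing and range adjustment extend network lifetime.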
19.
Med Lassaad Kaddachi, Adel Soudani 《Computer Standards & Interfaces》2012,34(1):14-23
In this paper, we present and evaluate a hardware solution for user-driven, packet-loss-tolerant image compression, specifically designed to enable low-power image compression and communication over wireless camera sensor networks (WCSNs). The proposed System-on-Chip is intended to be a hardware coprocessor embedded in the camera sensor node; its goal is to relieve the node microcontroller of the image compression tasks and to achieve high-speed, low-power image processing. The interest of our solution is twofold. First, compression settings can be changed at runtime, either upon reception of a request message sent by an end user or according to the internal state of the camera sensor node. Second, the image compression chain includes a (block of) pixel interleaving scheme that significantly improves robustness against packet loss during image communication. We discuss in depth the internal hardware architecture of the encoder chip, which is designed to reach high performance in FPGA and ASIC implementations. Synthesis results and relevant performance comparisons with related works are presented.