Similar Documents
20 similar documents found (search time: 0 ms)
1.
In spite of great advancements in multimedia data storage and communication technologies, compression of medical data remains challenging. This paper presents a novel compression method for medical images. The proposed method uses the Ripplet transform to represent singularities along arbitrarily shaped curves and the Set Partitioning in Hierarchical Trees (SPIHT) encoder to encode the significant coefficients. The main objective of the proposed method is to provide high-quality compressed images by representing images at different scales and directions and to achieve a high compression ratio. Experimental results obtained on a set of medical images demonstrate that, besides providing multiresolution and high directionality, the proposed method attains a high Peak Signal-to-Noise Ratio and a significant compression ratio compared with conventional and state-of-the-art compression methods.
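The Ripplet/SPIHT pipeline itself is not reproduced here; the following minimal sketch only shows the two figures of merit quoted in the abstract, PSNR and compression ratio, assuming 8-bit images and a compressed bitstream whose length is known (the variable names in the usage comment are illustrative).

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio between two 8-bit images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(raw_bytes, compressed_bytes):
    """Ratio of uncompressed size to compressed size."""
    return raw_bytes / compressed_bytes

# Hypothetical usage with a reconstructed image and its bitstream length:
# print(psnr(img, img_hat), compression_ratio(img.size, len(bitstream)))
```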

2.
Image segmentation is one of the most important and fundamental tasks in image processing, and techniques based on image thresholding are typically simple and computationally efficient. However, the segmentation results depend heavily on the chosen thresholding method. In this paper, the histogram is integrated with the Parzen window technique to estimate the spatial probability distribution of gray-level image values, and a novel criterion function is designed. By optimizing the criterion function, an optimal global threshold is obtained. The experimental results for synthetic and real-world images demonstrate the success of the proposed image thresholding method, as compared with the Otsu method, the MET method, and the entropy-based method.
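The abstract does not state the exact criterion function; the sketch below assumes a Gaussian Parzen-window smoothing of the gray-level histogram followed by an Otsu-style between-class-variance search, so it illustrates the overall idea rather than the paper's specific criterion.

```python
import numpy as np

def parzen_threshold(image, bandwidth=4.0):
    """Global threshold from a Parzen-smoothed gray-level distribution.

    The criterion used below is Otsu's between-class variance applied to
    the smoothed distribution; the original paper's criterion may differ.
    """
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()

    # Parzen (Gaussian kernel) estimate of the gray-level PDF.
    levels = np.arange(256)
    kernel = np.exp(-0.5 * ((levels[:, None] - levels[None, :]) / bandwidth) ** 2)
    kernel /= kernel.sum(axis=0)
    pdf = kernel @ p
    pdf /= pdf.sum()

    best_t, best_score = 0, -1.0
    for t in range(1, 255):
        w0, w1 = pdf[:t].sum(), pdf[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * pdf[:t]).sum() / w0
        mu1 = (levels[t:] * pdf[t:]).sum() / w1
        score = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```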

3.
In recent years, grey theory has been successfully used in many prediction applications. The proposed Markov-Fourier grey model prediction approach uses a grey model to roughly predict the next datum from a set of the most recent data. A Fourier series is then used to fit the residual error produced by the grey model, so that the grey model's error at the next step can be estimated. Such a Fourier residual correction approach can perform well; however, it only uses the most recent data without considering earlier data. In this paper, we further propose to adopt the Markov forecasting method as a long-term residual correction scheme. By combining the short-term value predicted by the Fourier series with the long-term error estimated by the Markov forecasting method, our approach can predict the future more accurately. Three time series are used in our demonstration: a smooth functional curve, a stock-market curve, and the Mackey-Glass chaotic time series. The performance of our approach is compared with different prediction schemes, such as back-propagation neural networks and fuzzy models, all performing one-step-ahead forecasting. The simulation results show that our approach predicts the future more accurately and also uses less computation time than the other methods.
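A compact sketch of the short-term part of such a scheme: a standard GM(1,1) grey model plus a Fourier series fitted to its in-sample residuals. The Markov long-term residual correction described in the abstract is omitted, and the number of harmonics is an arbitrary choice.

```python
import numpy as np

def gm11_fourier_forecast(x, n_harmonics=3):
    """One-step-ahead forecast: GM(1,1) grey model with Fourier residual correction.

    Sketch of the short-term part only; the Markov long-term residual
    correction is not reproduced here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)

    # --- GM(1,1): fit dx1/dt + a*x1 = b on the accumulated series ---
    x1 = np.cumsum(x)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]

    k = np.arange(n + 1)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])   # predicted original series
    x0_hat[0] = x[0]

    # --- Fourier series fitted to the in-sample residuals ---
    resid = x - x0_hat[:n]
    t = np.arange(n)
    cols = [np.ones(n)]
    for i in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * i * t / n), np.sin(2 * np.pi * i * t / n)]
    F = np.column_stack(cols)
    coef = np.linalg.lstsq(F, resid, rcond=None)[0]

    # Evaluate the Fourier correction at the next time step.
    t_next = n
    f_next = [1.0]
    for i in range(1, n_harmonics + 1):
        f_next += [np.cos(2 * np.pi * i * t_next / n), np.sin(2 * np.pi * i * t_next / n)]
    return x0_hat[n] + np.dot(f_next, coef)
```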

4.
Multimedia Tools and Applications - Development in networking technology has made the remote diagnosis and treatment of patients a reality through telemedicine. At the same time, storing and...

5.
In this paper we present a novel hardware architecture for real-time image compression implementing a fast, searchless iterated function system (SIFS) fractal coding method. In the proposed method and corresponding hardware architecture, domain blocks are fixed to a spatially neighboring area of the range blocks, in a manner similar to that given by Furao and Hasegawa. A quadtree structure, covering from 32 × 32 blocks down to 2 × 2 blocks, and even to single pixels, is used for partitioning. Coding of 2 × 2 blocks and single pixels is unique among current fractal coders. The hardware architecture contains units for domain construction, zig-zag transforms, range and domain mean computation, and a parallel domain-range match capable of concurrently generating a fractal code for all quadtree levels. With this efficient, parallel hardware architecture, the fractal encoding speed is improved dramatically, while the attained compression performance remains comparable to traditional search-based and other searchless methods. Experimental results, with the proposed hardware architecture implemented on an Altera APEX20K FPGA, show that the fractal encoder can encode a 512 × 512 × 8 image in approximately 8.36 ms when operating at 32.05 MHz. This architecture is therefore seen as a feasible solution to real-time fractal image compression.
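A software sketch of searchless quadtree fractal coding in the spirit of the abstract (not the hardware design): the domain is simply fixed to the double-sized neighbourhood that shares the range block's top-left corner, quadtree splitting stops at 4 × 4 blocks rather than at single pixels, and the error threshold is an arbitrary choice.

```python
import numpy as np

def encode_block(img, y, x, size, err_thresh, codes, min_size=4):
    """Searchless fractal coding of one range block with quadtree splitting.

    The domain is fixed to the 2*size x 2*size neighbourhood whose top-left
    corner coincides with the range block (a simplification of the
    neighbouring-domain rule mentioned in the abstract).
    """
    R = img[y:y + size, x:x + size].astype(float)

    # Build the domain from the neighbouring area and shrink it to range size.
    D = img[y:y + 2 * size, x:x + 2 * size].astype(float)
    D = D.reshape(size, 2, size, 2).mean(axis=(1, 3))

    # Least-squares contrast s and brightness o for R ~ s*D + o.
    d, r = D.ravel() - D.mean(), R.ravel() - R.mean()
    s = (d @ r) / (d @ d) if d @ d > 0 else 0.0
    o = R.mean() - s * D.mean()
    err = np.mean((R - (s * D + o)) ** 2)

    if err <= err_thresh or size <= min_size:
        codes.append((y, x, size, s, o))           # one fractal code per block
    else:                                          # split into four quadrants
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                encode_block(img, y + dy, x + dx, h, err_thresh, codes, min_size)

def fractal_encode(img, block=32, err_thresh=25.0):
    """Quadtree searchless fractal encoder; image sides must be multiples of `block`."""
    img = np.pad(img, ((0, block), (0, block)), mode="edge")  # room for domains
    codes = []
    h, w = img.shape[0] - block, img.shape[1] - block
    for y in range(0, h, block):
        for x in range(0, w, block):
            encode_block(img, y, x, block, err_thresh, codes)
    return codes
```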

6.
Neural Computing and Applications - Breast cancer is one of the most significant causes of tumor death in women. Computer-aided diagnosis (CAD) supports radiologists in recognizing the irregularities in an...

7.
The paper deals with an image compression method using differential pulse-code modulation (DPCM) with an adaptive extrapolator capable of adjusting itself to the local characteristics of image contours (boundaries). The negative effect of quantization on the optimization of the adaptive extrapolator is investigated; even so, the experiments show that the adaptive extrapolator is more effective than its prototypes. We study the method as a whole, with close consideration given to the coding of the quantized signal. The maximal-error criterion and the Waterloo grey set of real images are used to compare the method with the JPEG technique.
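The paper's optimized adaptive extrapolator is not specified in the abstract; the sketch below uses the well-known MED (median edge detector) predictor as a stand-in edge-adaptive extrapolator inside a basic DPCM loop with a uniform quantizer, so the step size and the predictor are assumptions.

```python
import numpy as np

def dpcm_encode(img, step=8):
    """DPCM with an edge-adaptive predictor and a uniform quantizer.

    The MED (median edge detector) predictor is used here as a simple
    edge-aware stand-in for the paper's adaptive extrapolator.  Returns
    the quantized prediction errors and the decoder-side reconstruction.
    """
    img = img.astype(float)
    h, w = img.shape
    recon = np.zeros_like(img)
    q_err = np.zeros((h, w), dtype=int)

    for y in range(h):
        for x in range(w):
            a = recon[y, x - 1] if x > 0 else 0.0              # left neighbour
            b = recon[y - 1, x] if y > 0 else 0.0              # upper neighbour
            c = recon[y - 1, x - 1] if x > 0 and y > 0 else 0.0
            if c >= max(a, b):
                pred = min(a, b)        # edge above or to the left
            elif c <= min(a, b):
                pred = max(a, b)
            else:
                pred = a + b - c        # smooth area: planar prediction
            e = img[y, x] - pred
            q = int(round(e / step))    # uniform quantization of the residual
            q_err[y, x] = q
            recon[y, x] = pred + q * step
    return q_err, recon
```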

8.
Picture compression algorithms using a parallel structure of neural networks have recently been described. Although these algorithms are intrinsically robust, and may therefore be used in high-noise environments, they suffer from several drawbacks: high computational complexity, moderate reconstructed picture quality, and a variable bit-rate. In this paper, we describe a simple parallel structure in which all three drawbacks are eliminated: the computational complexity is low, the quality of the decompressed picture is high, and the bit-rate is fixed.

9.
In this study, a novel segmentation technique is proposed for multispectral satellite image compression. A segmentation decision rule composed of the principal eigenvectors of the image correlation matrix is derived to determine the similarity of the image characteristics of two image blocks. Based on this decision rule, we develop an eigenregion-based segmentation technique that divides the original image into appropriate eigenregions according to their local terrain characteristics. To achieve better compression efficiency, each eigenregion image is then compressed by an efficient compression algorithm, the eigenregion-based eigensubspace transform (ER-EST). The ER-EST combines a 1D eigensubspace transform (EST) and a 2D-DCT to decorrelate the data in the spectral and spatial domains. Before performing the EST, the dimension of its transformation matrix is estimated by an information criterion, so that the eigenregion image may be approximated by lower-dimensional components in the eigensubspace. Simulation tests performed on SPOT and Landsat TM images demonstrate that the proposed compression scheme is suitable for multispectral satellite images.
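The eigenregion segmentation itself is not reproduced here; this sketch only illustrates the spectral-plus-spatial decorrelation stage, applying a principal-eigenvector (EST-style) projection across bands followed by a 2D-DCT on each component, with the number of retained components chosen by hand rather than by an information criterion.

```python
import numpy as np
from scipy.fft import dctn

def est_dct_transform(cube, n_components=3):
    """Spectral eigensubspace transform followed by a spatial 2D-DCT.

    `cube` is a (bands, height, width) multispectral image.  The
    eigenregion segmentation step is omitted; this sketch applies the
    EST + 2D-DCT stage to the whole image.
    """
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).astype(float)

    # Principal eigenvectors of the band correlation matrix.
    corr = (X @ X.T) / X.shape[1]
    eigvals, eigvecs = np.linalg.eigh(corr)
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]

    # Spectral decorrelation, then 2D-DCT on each eigencomponent image.
    comps = (top.T @ X).reshape(n_components, h, w)
    return np.stack([dctn(c, norm="ortho") for c in comps])
```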

10.
The goal of image compression is to remove redundancies so as to minimize the number of bits required to represent an image, while steganography works by embedding secret data in the redundancies of the image in an invisible manner. Our focus in this paper is the improvement of image compression through steganography. Even though the purposes of digital steganography and data compression are by definition contradictory, we use these techniques jointly to compress an image. Hence, two schemes exploring this idea are suggested. The first scheme combines a steganographic algorithm with the baseline DCT-based JPEG, while the second one uses this steganographic algorithm with the DWT-based JPEG. In this study, data compression is performed twice. First, we take advantage of energy compaction using JPEG to reduce redundant data. Second, we embed some bit blocks within subsequent blocks of the same image using steganography. The embedded bits not only avoid increasing the file size of the compressed image, but actually decrease it further. Experimental results show that this promising technique has wide potential in image coding.

11.
Multimedia Tools and Applications - In the modern technological era, image encryption has become an attractive and interesting field for researchers. They work on improving the security of image data...

12.
This paper presents a novel technique to discover double JPEG compression traces. Existing detectors operate only in the scenario where the image under investigation is explicitly available in JPEG format; consequently, if the quantization information of the JPEG file is unknown, their performance degrades dramatically. Our method addresses both forensic scenarios, resulting in a fresh perceptual detection pipeline. We suggest a dimensionality reduction algorithm to visualize the behavior of a large database of various single- and double-compressed images. Based on the intuitions gained from this visualization, three learning strategies are proposed: bottom-up, top-down, and combined top-down/bottom-up. Our tool discriminates single-compressed images from their double-compressed counterparts, estimates the first quantization in double compression, and localizes tampered regions in a forgery examination. Extensive experiments on three databases demonstrate that the results are robust across different quality levels. The F1-measure improvement over the best state-of-the-art approach reaches up to 26.32%. An implementation of the algorithms is available to fellow researchers upon request.

13.
W., Journal of Systems Architecture, 2008, 54(10): 983-994
The Kohonen self-organizing map (K-SOM) has proved suitable for lossy compression of digital images. The major drawback of software implementations of this technique is that they are highly computationally intensive. Fortunately, the structure is fairly easy to convert into hardware processing units executing in parallel. The resulting hardware system, however, consumes much of a chip's internal resources, i.e. slice registers and look-up table units, so that more than a single chip is needed to realize the structure in a pure hardware implementation. Previously proposed K-SOM realizations mainly targeted application-specific integrated circuits (ASICs), with few restrictions on resource utilization. In this paper, we propose an alternative K-SOM architecture suitable for moderate-density FPGAs, achieving acceptable image quality and frame rate, and present its hardware architecture and synthesis results. The proposed K-SOM algorithm trades off image quality, frame-rate throughput, FPGA resource utilization and, additionally, the topological relationship among the neural cells within the network. The architecture has been successfully synthesized on a single moderate-resource FPGA with acceptable image quality and frame rate.
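A plain software sketch of K-SOM vector quantization for image blocks (codebook training plus index encoding); the hardware parallelization is not modelled, and the block size, map size, and learning schedule are assumptions rather than the paper's design.

```python
import numpy as np

def train_ksom(blocks, n_units=256, epochs=10, lr0=0.5, sigma0=None, seed=0):
    """Train a 1-D Kohonen SOM codebook on flattened image blocks."""
    rng = np.random.default_rng(seed)
    dim = blocks.shape[1]
    weights = rng.uniform(blocks.min(), blocks.max(), size=(n_units, dim))
    sigma0 = sigma0 or n_units / 4
    units = np.arange(n_units)

    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3
        for x in blocks[rng.permutation(len(blocks))]:
            bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))
            # Neighbourhood update preserving the 1-D map topology.
            h = np.exp(-((units - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def ksom_encode(img, weights, block=4):
    """Encode an image as best-matching-unit indices (one index per block)."""
    h, w = img.shape
    idx = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            v = img[y:y + block, x:x + block].astype(float).ravel()
            idx.append(int(np.argmin(np.sum((weights - v) ** 2, axis=1))))
    return np.array(idx)
```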

14.
Image authentication is becoming very important for certifying data integrity. A key issue in image authentication is the design of a compact signature that contains sufficient information to detect illegal tampering yet is robust under allowable manipulations. In this paper, we recognize that most permissible operations on images are global distortions such as low-pass filtering and JPEG compression, whereas illegal data manipulations tend to be localized distortions. To exploit this observation, we propose an image authentication scheme in which the signature is the result of an extremely low-bit-rate content-based compression. The content-based compression is guided by a space-variant weighting function whose values are higher in the more important and sensitive regions. This spatially dependent weighting function determines a weighted norm that is particularly sensitive to the localized distortions induced by illegal tampering. It also gives better compactness than the usual compression schemes, which treat every spatial region as equally important. In our implementation, the weighting function is a multifovea weighted function that resembles the biological foveated vision system. The foveae are salient points determined in the scale-space representation of the image. The desirable properties of the multifovea weighted function in the wavelet domain fit nicely into our scheme. We have implemented our technique and tested its robustness and sensitivity under several manipulations.

15.
We describe a method of combining classification and compression into a single vector quantizer by incorporating a Bayes risk term into the distortion measure used in the quantizer design algorithm. Once trained, the quantizer can operate to minimize the Bayes-risk-weighted distortion measure if a model providing the required posterior probabilities is available, or it can operate in a suboptimal fashion by minimizing the squared error only. Comparisons are made with other vector-quantizer-based classifiers, including the independent design of quantization and minimum Bayes risk classification, and Kohonen's LVQ. A variety of examples demonstrate that the proposed method can provide classification ability close to or superior to that of LVQ while simultaneously providing superior compression performance.
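The encoding rule implied by the abstract, squared error plus a weighted Bayes risk term, can be sketched as follows; the posterior model, cost matrix, and weight lambda are external inputs and purely illustrative.

```python
import numpy as np

def bayes_vq_encode(x, codebook, code_classes, posteriors, cost, lam=1.0):
    """Assign a vector to the codeword minimizing squared error plus Bayes risk.

    `posteriors[k]` is P(class k | x) from some external model, and
    `cost[k, c]` is the cost of deciding class c when the truth is k;
    both are assumed to be supplied by the caller.
    """
    sq_err = np.sum((codebook - x) ** 2, axis=1)
    risk = np.array([np.dot(posteriors, cost[:, c]) for c in code_classes])
    return int(np.argmin(sq_err + lam * risk))
```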

16.
17.
This paper describes a new adaptive technique for the coding of transform coefficients in block-based image compression schemes. The presence and orientation of edge information in a sub-block are used to select different quantization tables and zigzag scan paths to suit the local image pattern. Measures of edge presence and edge orientation in a sub-block are calculated from its DCT coefficients, and each sub-block is classified into one of four edge patterns. Experimental results show that, compared with JPEG and improved HVS-based coding, the new scheme significantly increases the compression ratio without sacrificing reconstructed image quality.
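The exact edge measures are not given in the abstract; the sketch below classifies an 8 × 8 block from the energies of its first-row, first-column, and remaining AC coefficients, with thresholds that are arbitrary assumptions. An encoder would then map the returned label to a quantization table and a zigzag scan path.

```python
import numpy as np
from scipy.fft import dctn

def classify_block_edge(block, edge_ratio=2.0, edge_energy=50.0):
    """Classify an 8x8 block's edge pattern from its DCT coefficients.

    The row/column AC-energy heuristic and thresholds are assumptions,
    not the paper's measures.  Returns one of 'none', 'horizontal',
    'vertical', 'diagonal'.
    """
    coeffs = dctn(block.astype(float), norm="ortho")
    row_energy = np.sum(coeffs[0, 1:] ** 2)       # horizontal frequencies
    col_energy = np.sum(coeffs[1:, 0] ** 2)       # vertical frequencies
    diag_energy = np.sum(coeffs[1:, 1:] ** 2)     # mixed frequencies

    total_ac = row_energy + col_energy + diag_energy
    if total_ac < edge_energy:
        return "none"
    if row_energy > edge_ratio * col_energy:
        return "vertical"      # strong horizontal frequencies imply a vertical edge
    if col_energy > edge_ratio * row_energy:
        return "horizontal"
    return "diagonal"
```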

18.
Multilevel thresholding is one of the most important areas in the field of image segmentation. However, the computational complexity of multilevel thresholding increases exponentially with the number of thresholds. To overcome this drawback, a new approach to multilevel thresholding based on the Grey Wolf Optimizer (GWO) is proposed in this paper. GWO is inspired by the social and hunting behaviour of grey wolves. This metaheuristic algorithm is applied to the multilevel thresholding problem using Kapur's entropy and Otsu's between-class variance as objective functions. The proposed method is tested on a set of standard test images, and its performance is compared with improved versions of PSO (Particle Swarm Optimization) and BFO (Bacterial Foraging Optimization) based multilevel thresholding methods. The quality of the segmented images is assessed using the Mean Structural SIMilarity (MSSIM) index. Experimental results suggest that the proposed method is more stable and yields solutions of higher quality than the PSO- and BFO-based methods. Moreover, the proposed method is found to be faster than BFO but slower than the PSO-based method.
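A compact sketch of GWO-based multilevel thresholding with Otsu's between-class variance as the fitness function (Kapur's entropy could be substituted in the same way); the population size, iteration count, and other settings are arbitrary choices, not the paper's.

```python
import numpy as np

def otsu_fitness(thresholds, hist, levels):
    """Between-class variance for a set of thresholds (higher is better)."""
    t = np.concatenate(([0], np.sort(thresholds).astype(int), [256]))
    p = hist / hist.sum()
    mu_total = (levels * p).sum()
    var = 0.0
    for lo, hi in zip(t[:-1], t[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

def gwo_thresholds(image, n_thresholds=3, n_wolves=20, n_iter=100, seed=0):
    """Multilevel thresholding via the Grey Wolf Optimizer maximizing Otsu's criterion."""
    rng = np.random.default_rng(seed)
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    levels = np.arange(256, dtype=float)

    wolves = rng.uniform(1, 255, size=(n_wolves, n_thresholds))
    fitness = np.array([otsu_fitness(w, hist, levels) for w in wolves])

    for it in range(n_iter):
        order = np.argsort(fitness)[::-1]
        alpha, beta, delta = (wolves[j].copy() for j in order[:3])
        a = 2.0 * (1 - it / n_iter)            # linearly decreasing coefficient

        for i in range(n_wolves):
            X = np.zeros(n_thresholds)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(n_thresholds), rng.random(n_thresholds)
                A, C = 2 * a * r1 - a, 2 * r2
                X += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(X / 3.0, 1, 255)   # average of the three leaders
            fitness[i] = otsu_fitness(wolves[i], hist, levels)

    best = wolves[np.argmax(fitness)]
    return np.sort(best).astype(int)
```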

19.
The binary-image-compression problem is analyzed using an irreducible cover of maximal rectangles. A bound on the minimum-rectangular-cover problem for image compression is given under certain conditions that have not previously been analyzed. It is demonstrated that, for a simply connected image, the proposed irreducible cover uses fewer than four times the number of rectangles in a minimum cover. With n pixels in a square, the parallel algorithm for obtaining the irreducible cover uses O(n/log n) concurrent-read exclusive-write (CREW) processors and runs in O(log n) time.

20.
We propose a new algorithm for image compression based on compressive sensing (CS). The algorithm starts with a traditional multilevel 2-D wavelet decomposition, which provides a compact representation of the image pixels. We then introduce a new approach for rearranging the wavelet coefficients in a structured manner to form sparse vectors. We use a Gaussian random measurement matrix normalized with the weighted-average root-mean-squared energies of the different wavelet subbands, and compressed sampling is performed using this normalized measurement matrix. At the decoding end, the image is reconstructed using a simple ℓ1-minimization technique. The proposed wavelet-based CS reconstruction with the normalized measurement matrix yields a performance increase over conventional CS-based techniques and introduces a completely new framework for using CS in the wavelet domain. The technique was tested on different natural images, and we show that it outperforms most existing CS-based compression methods.
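A small-scale sketch of the overall pipeline: wavelet decomposition, Gaussian random measurement, and an ℓ1 reconstruction via iterative soft thresholding (ISTA). The coefficient rearrangement and subband-energy normalization described in the abstract are not reproduced, and the sampling ratio, wavelet, and regularization weight are arbitrary assumptions.

```python
import numpy as np
import pywt

def cs_compress_reconstruct(img, ratio=0.4, wavelet="db4", level=3,
                            lam=0.1, n_iter=200, seed=0):
    """Wavelet-domain compressive sensing with a Gaussian measurement matrix
    and a simple ISTA l1 reconstruction.

    Intended for small images only: the dense measurement matrix grows
    quadratically with the number of wavelet coefficients.
    """
    rng = np.random.default_rng(seed)

    # Sparsifying transform: multilevel 2-D wavelet decomposition.
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    x_true, slices = pywt.coeffs_to_array(coeffs)
    shape, x_true = x_true.shape, x_true.ravel()

    # Compressed sampling: y = Phi @ x with a random Gaussian matrix.
    n = x_true.size
    m = int(ratio * n)
    Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    y = Phi @ x_true

    # ISTA: minimize 0.5*||y - Phi x||^2 + lam*||x||_1.
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

    rec = pywt.array_to_coeffs(x.reshape(shape), slices, output_format="wavedec2")
    return pywt.waverec2(rec, wavelet)
```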
