Similar Literature
20 similar documents found.
1.
In spite of great advancements in multimedia data storage and communication technologies, the compression of medical data remains challenging. This paper presents a novel method for the compression of medical images. The proposed method uses the Ripplet transform to represent singularities along arbitrarily shaped curves and the Set Partitioning in Hierarchical Trees (SPIHT) encoder to encode the significant coefficients. The main objective is to provide high-quality compressed images, by representing images at different scales and directions, while achieving a high compression ratio. Experimental results on a set of medical images demonstrate that, besides providing multiresolution and high directionality, the proposed method attains a high peak signal-to-noise ratio and a significant compression ratio compared with conventional and state-of-the-art compression methods.

2.
Image segmentation is one of the most important and fundamental tasks in image processing, and techniques based on image thresholding are typically simple and computationally efficient. However, segmentation results depend heavily on the chosen thresholding method. In this paper, the histogram is integrated with the Parzen window technique to estimate the spatial probability distribution of gray-level image values, and a novel criterion function is designed. By optimizing the criterion function, an optimal global threshold is obtained. Experimental results on synthetic and real-world images demonstrate the success of the proposed thresholding method, as compared with the Otsu method, the MET method, and the entropy-based method.
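As a sketch of the idea only (the abstract does not give the paper's exact criterion function), the gray-level histogram can be smoothed with a Gaussian Parzen window and a global threshold then chosen by a standard criterion such as Otsu's between-class variance. The bandwidth value and the Gaussian kernel are assumptions:

```python
import numpy as np

def parzen_density(gray_values, bandwidth=8.0, levels=256):
    """Estimate the gray-level probability density by smoothing the
    histogram with a Gaussian Parzen window (bandwidth is illustrative)."""
    hist = np.bincount(gray_values.ravel(), minlength=levels).astype(float)
    hist /= hist.sum()
    g = np.arange(levels)
    # Gaussian kernel centered at each gray level, column-normalized
    kernel = np.exp(-0.5 * ((g[:, None] - g[None, :]) / bandwidth) ** 2)
    kernel /= kernel.sum(axis=0)
    return kernel @ hist

def otsu_threshold(p):
    """Baseline Otsu threshold on a (smoothed) gray-level distribution,
    standing in for the paper's own criterion function."""
    g = np.arange(len(p))
    best_t, best_var = 0, -1.0
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (g[:t] * p[:t]).sum() / w0
        mu1 = (g[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image the smoothed density yields a threshold between the two modes.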

3.
Compressed-image interpolation using adaptive symmetric autoregressive models
Objective: Most image interpolation methods consider only the down-sampling degradation of the low-resolution image and ignore the effect of coding noise. A new compressed-image interpolation method based on adaptive symmetric autoregressive models is proposed. Method: It is assumed that locally similar image blocks share the same interpolation model. The method has a training stage and a reconstruction stage. In the training stage, principal component analysis is first applied to the training images to extract the dominant local gradient direction of each block; blocks are classified by direction, and a symmetric autoregressive model and a training set are built for each direction class. Next, each direction's training set is sub-classified by K-means clustering according to image-primitive features. Finally, for each sub-class, the model of its direction class is selected and the corresponding model coefficients are estimated by constrained least squares. In the reconstruction stage, the direction class of a test block is first determined from its dominant local gradient direction; then the Euclidean distances between the block's primitive features and all cluster centers of that direction class are computed, and the autoregressive model of the nearest cluster center is used for interpolation. Results: Tests on eight images under two JPEG quantization settings, compared with seven typical interpolation methods, show that the proposed method effectively suppresses coding noise and outperforms the others in both peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Conclusion: The method has low complexity and is suitable for resolution enhancement in image communication.

4.
In recent years grey theory has been successfully used in many prediction applications. The Markov-Fourier grey model prediction approach uses a grey model to roughly predict the next datum from a set of most recent data. A Fourier series is then used to fit the residual error produced by the grey model, so that the grey model's error in the next step can be estimated. Such a Fourier residual correction approach can perform well, but it uses only the most recent data without considering earlier data. In this paper, we further propose to adopt the Markov forecasting method as a long-term residual correction scheme. By combining the short-term value predicted by the Fourier series with the long-term error estimated by the Markov forecasting method, our approach can predict the future more accurately. Three time series are used in our demonstration: a smooth functional curve, a stock-market curve, and the Mackey-Glass chaotic time series. The performance of our approach is compared with different prediction schemes, such as back-propagation neural networks and fuzzy models, all of which perform one-step-ahead forecasting. The simulation results show that our approach predicts the future more accurately and also uses less computational time than the other methods.
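The grey-model core of the approach can be sketched as a textbook GM(1,1) one-step-ahead predictor; the Fourier and Markov residual corrections described above are omitted in this sketch:

```python
import numpy as np

def gm11_predict(x0):
    """One-step-ahead GM(1,1) grey prediction (textbook form; the paper
    additionally corrects the residual with a Fourier series and a
    Markov chain, which is not reproduced here)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    n = len(x0)
    # whitened-equation solution, then inverse accumulation for the next datum
    x1_next = (x0[0] - b / a) * np.exp(-a * n) + b / a
    x1_last = (x0[0] - b / a) * np.exp(-a * (n - 1)) + b / a
    return x1_next - x1_last
```

GM(1,1) fits near-exponential series almost exactly, which is why the residual-correction stages above focus on the remaining error.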

5.
This paper presents a new approach to determining the best design combination of product form elements for matching a given product image represented by a word pair. A grey relational analysis (GRA) model is used to examine the relationship between product form elements and product image, thus identifying the most influential elements of product form for a given product image. A grey prediction (GP) model and a neural network (NN) model are used individually and in conjunction with the GRA model, in order to predict and suggest the best form design combination. An experimental study on the form design of mobile phones is conducted to evaluate the performance of these models. Based on expert surveys, the concept of Kansei Engineering is used to extract and evaluate the experimental samples, and a morphological analysis is used to extract form elements from these sample mobile phones. The evaluation result shows that all the NN-based models outperform the GP-based models, suggesting that the NN model should be used to help product designers determine the best combination of form elements for achieving a desirable product image. The GRA model can be incorporated into the NN model to help designers focus on the most influential elements in form design of mobile phones.

6.
Multimedia Tools and Applications - Development in networking technology has made the remote diagnosis and treatment of patients a reality through telemedicine. At the same time, storing and...

7.
In this paper we present a novel hardware architecture for real-time image compression implementing a fast, searchless iterated function system (SIFS) fractal coding method. In the proposed method and corresponding hardware architecture, domain blocks are fixed to a spatially neighboring area of range blocks in a manner similar to that given by Furao and Hasegawa. A quadtree structure, covering from 32 × 32 blocks down to 2 × 2 blocks, and even to single pixels, is used for partitioning. Coding of 2 × 2 blocks and single pixels is unique among current fractal coders. The hardware architecture contains units for domain construction, zig-zag transforms, range and domain mean computation, and a parallel domain-range match capable of concurrently generating a fractal code for all quadtree levels. With this efficient, parallel hardware architecture, the fractal encoding speed is improved dramatically. Additionally, attained compression performance remains comparable to traditional search-based and other searchless methods. Experimental results, with the proposed hardware architecture implemented on an Altera APEX20K FPGA, show that the fractal encoder can encode a 512 × 512 × 8 image in approximately 8.36 ms operating at 32.05 MHz. Therefore, this architecture is seen as a feasible solution to real-time fractal image compression.
David Jeff Jackson
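The quadtree partitioning step can be sketched as a recursive variance-driven split from large blocks down to a minimum size; the variance threshold is an illustrative stand-in for the coder's actual split criterion:

```python
import numpy as np

def quadtree(block, thresh=10.0, min_size=2):
    """Recursive quadtree partition of a square block: split while the
    block is non-uniform (plain variance test, an assumption) and larger
    than min_size; each leaf records its size and mean intensity."""
    h = len(block)
    if h <= min_size or np.var(block) <= thresh:
        return [(h, float(np.mean(block)))]   # leaf: (size, mean)
    m = h // 2
    b = np.asarray(block)
    leaves = []
    for sub in (b[:m, :m], b[:m, m:], b[m:, :m], b[m:, m:]):
        leaves += quadtree(sub, thresh, min_size)
    return leaves
```

A flat block stays one leaf, while a block with a single bright corner splits only where the detail is, which is the behavior the fractal coder exploits for its parallel per-level matching.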

8.
Neural Computing and Applications - Breast cancer is a significant cause of tumor death in women. Computer-aided diagnosis (CAD) supports radiologists in recognizing irregularities in an...

9.
The paper deals with an image compression method using differential pulse-code modulation (DPCM) with an adaptive extrapolator capable of adjusting itself to local distinctions of image contours (boundaries). The negative effect of quantization on the optimization of the adaptive extrapolator is investigated. Even so, the experiments show that the adaptive extrapolator is more effective than its prototypes. We study the method as a whole, with close consideration given to the coding of the quantized signal. The maximal-error criterion and the Waterloo grey set of real patterns are used to compare the method with the JPEG technique.
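A minimal sketch of DPCM with a quantized prediction error follows. It uses a fixed previous-pixel predictor rather than the paper's adaptive contour-aware extrapolator, and the step size is illustrative:

```python
import numpy as np

def dpcm_row(row, step=8):
    """1-D DPCM along an image row with a uniform quantizer: quantize the
    prediction error, and track the decoder's reconstruction so encoder
    and decoder predictors never drift apart."""
    recon, codes = [], []
    prev = 128                       # predictor state, seeded at mid-gray
    for x in row:
        e = int(x) - prev            # prediction error
        q = int(round(e / step))     # quantize the error, not the pixel
        codes.append(q)
        prev = int(np.clip(prev + q * step, 0, 255))  # decoder-side reconstruction
        recon.append(prev)
    return codes, recon
```

The reconstruction error per pixel is bounded by half the quantizer step, which is the quantization effect the paper studies against the extrapolator's optimization.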

10.
Picture compression algorithms, using a parallel structure of neural networks, have recently been described. Although these algorithms are intrinsically robust, and may therefore be used in high noise environments, they suffer from several drawbacks: high computational complexity, moderate reconstructed picture qualities, and a variable bit-rate. In this paper, we describe a simple parallel structure in which all three drawbacks are eliminated: the computational complexity is low, the quality of the decompressed picture is high, and the bit-rate is fixed.

11.
The goal of image compression is to remove redundancies, minimizing the number of bits required to represent an image, while steganography works by invisibly embedding secret data in those redundancies. Our focus in this paper is the improvement of image compression through steganography. Even though the purposes of digital steganography and data compression are by definition contradictory, we use the two techniques jointly to compress an image. Two schemes exploring this idea are suggested. The first combines a steganographic algorithm with the baseline DCT-based JPEG, while the second uses the same steganographic algorithm with DWT-based JPEG. In this study, data compression is performed twice. First, we take advantage of JPEG's energy compaction to reduce redundant data. Second, we embed some bit blocks within subsequent blocks of the same image using steganography. The embedded bits not only avoid increasing the file size of the compressed image but decrease it further. Experimental results show that this promising technique has wide potential in image coding.

12.
In this study, a novel segmentation technique is proposed for multispectral satellite image compression. A segmentation decision rule composed of the principal eigenvectors of the image correlation matrix is derived to determine the similarity of image characteristics of two image blocks. Based on this decision rule, we develop an eigenregion-based segmentation technique that divides the original image into proper eigenregions according to their local terrain characteristics. To achieve better compression efficiency, each eigenregion image is then compressed by an efficient algorithm, the eigenregion-based eigensubspace transform (ER-EST), which combines a 1-D eigensubspace transform (EST) with a 2D-DCT to decorrelate the data in the spectral and spatial domains. Before performing the EST, the dimension of its transformation matrix is estimated by an information criterion; in this way, each eigenregion image may be approximated by lower-dimensional components in the eigensubspace. Simulation tests on SPOT and Landsat TM images demonstrate that the proposed compression scheme is suitable for multispectral satellite images.
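The 1-D eigensubspace transform along the spectral axis can be sketched as a per-pixel projection onto the top eigenvectors of the band covariance (i.e., plain spectral PCA); the paper's information-criterion dimension selection and the 2D-DCT stage are omitted:

```python
import numpy as np

def spectral_pca(cube, k):
    """Project each pixel's band vector onto the top-k eigenvectors of the
    band covariance matrix: a sketch of the EST decorrelation step for an
    h x w x bands multispectral cube."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    return (Xc @ top).reshape(h, w, k), top
```

When the bands are linearly dependent, a single component reconstructs the cube exactly, which is the redundancy the EST removes before spatial coding.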

13.
To address the high computational complexity of motion estimation in the H.264 video coding standard, a fast motion estimation algorithm with dynamic modes is proposed. The algorithm selects a search pattern according to the magnitude and direction of macroblock motion; it also improves the standard's median prediction and introduces a dynamic early-skip strategy for reference blocks. Experimental results show that, while maintaining good rate-distortion performance, the algorithm reduces motion estimation time by 85.28% and 35.29% relative to the fast full search (FFS) and UMHexagonS algorithms, respectively.
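For reference, the exhaustive SAD block matching that fast algorithms such as UMHexagonS approximate with far fewer candidate points can be sketched as follows; the window radius and block size are illustrative:

```python
import numpy as np

def full_search_sad(ref, cur_block, top, left, radius=4):
    """Exhaustive block matching: test every displacement inside a small
    window around (top, left) and return the one minimizing the sum of
    absolute differences (SAD) against the current block."""
    h, w = cur_block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue   # candidate falls outside the reference frame
            sad = np.abs(ref[y:y+h, x:x+w].astype(int)
                         - cur_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad
```

The (2r+1)^2 candidate evaluations per block are the cost that pattern-based searches cut down, at some risk of missing the true minimum.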

14.
Multimedia Tools and Applications - In the modern technological era, image encryption has become an attractive and interesting field for researchers. They work to improve the security of image data...

15.
W., Journal of Systems Architecture, 2008, 54(10): 983-994
The Kohonen self-organizing map (K-SOM) has proved suitable for lossy compression of digital images. The major drawback of software implementations of this technique is that they are very computationally intensive. Fortunately, the structure is fairly easy to convert into hardware processing units executing in parallel. The resulting hardware system, however, consumes much of a microchip's internal resources, i.e., slice registers and look-up table units, so that more than a single microchip is needed to realize the structure in a pure hardware implementation. Previously proposed K-SOM realizations were mainly targeted at application-specific integrated circuits (ASICs), with few restrictions on resource utilization. In this paper, we propose an alternative K-SOM architecture suitable for moderate-density FPGAs, and present its hardware architecture and synthesis results. The proposed K-SOM algorithm trades off image quality, frame-rate throughput, the FPGA's resource utilization, and the topological relationship among neural cells within the network. The architecture has been successfully synthesized on a single moderate-resource FPGA with acceptable image quality and frame rate.

16.
This paper presents a novel technique to discover traces of double JPEG compression. Existing detectors operate only in the scenario where the image under investigation is explicitly available in JPEG format; consequently, if the quantization information of the JPEG file is unknown, their performance degrades dramatically. Our method addresses both forensic scenarios, resulting in a new perception-based detection pipeline. We suggest a dimensionality-reduction algorithm to visualize the behavior of a large database of single- and double-compressed images. Based on the intuitions gained from this visualization, three learning strategies are proposed: bottom-up, top-down, and combined top-down/bottom-up. Our tool discriminates single-compressed images from their double-compressed counterparts, estimates the first quantization in double compression, and localizes tampered regions in a forgery examination. Extensive experiments on three databases demonstrate that the results are robust across different quality levels, with an F1-measure improvement over the best state-of-the-art approach of up to 26.32%. An implementation of the algorithms is available upon request.

17.
H.264 achieves high coding efficiency but also has high computational complexity. This paper optimizes the Unsymmetrical-cross Multi-Hexagon-grid Search (UMHexagonS) algorithm used in H.264, proposing three dynamic models for the early-termination threshold, the search window size, and the search pattern, which improve the algorithm's adaptivity. Tests on six video sequences with different degrees of motion show that, relative to the original UMHexagonS algorithm, the optimized algorithm reduces encoding time by 21.67% and motion estimation time by 47.49% on average, at the cost of only a 0.02 dB drop in peak signal-to-noise ratio and a 1.69% increase in bit rate.

18.
Multilevel thresholding is one of the most important areas in the field of image segmentation. However, its computational complexity increases exponentially with the number of thresholds. To overcome this drawback, a new approach to multilevel thresholding based on the Grey Wolf Optimizer (GWO) is proposed in this paper. GWO is inspired by the social and hunting behaviour of grey wolves. This metaheuristic algorithm is applied to the multilevel thresholding problem using Kapur's entropy and Otsu's between-class variance functions. The proposed method is tested on a set of standard test images, and its performance is compared with improved versions of PSO (particle swarm optimization) and BFO (bacterial foraging optimization) based multilevel thresholding methods. The quality of the segmented images is computed using the mean structural similarity (MSSIM) index. Experimental results suggest that the proposed method is more stable and yields higher-quality solutions than the PSO- and BFO-based methods. Moreover, it is found to be faster than BFO but slower than the PSO-based method.
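Kapur's entropy criterion, which the optimizer maximizes, can be evaluated directly; the brute-force search below illustrates the exhaustive cost (exponential in the number of thresholds) that metaheuristics like GWO avoid:

```python
import numpy as np
from itertools import combinations

def kapur_entropy(hist, thresholds):
    """Kapur's criterion for a set of thresholds: the sum of the Shannon
    entropies of each gray-level class defined by the thresholds."""
    p = hist / hist.sum()
    bounds = [0] + sorted(thresholds) + [len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            return -np.inf                 # empty class: invalid split
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -(q * np.log(q)).sum()
    return total

def exhaustive_best(hist, k=2, levels=None):
    """Brute-force maximization over all threshold k-tuples; this is the
    combinatorial search that a metaheuristic replaces."""
    levels = levels or len(hist)
    return max(combinations(range(1, levels), k),
               key=lambda t: kapur_entropy(hist, list(t)))
```

On a tiny 5-level histogram the exhaustive search already enumerates every candidate; at 256 levels with several thresholds the count explodes, motivating GWO/PSO/BFO.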

19.
Image authentication is becoming very important for certifying data integrity. A key issue in image authentication is the design of a compact signature that contains sufficient information to detect illegal tampering yet is robust under allowable manipulations. In this paper, we recognize that most permissible operations on images are global distortions like low-pass filtering and JPEG compression, whereas illegal data manipulations tend to be localized distortions. To exploit this observation, we propose an image authentication scheme where the signature is the result of an extremely low-bit-rate content-based compression. The content-based compression is guided by a space-variant weighting function whose values are higher in the more important and sensitive region. This spatially dependent weighting function determines a weighted norm that is particularly sensitive to the localized distortions induced by illegal tampering. It also gives a better compactness compared to the usual compression schemes that treat every spatial region as being equally important. In our implementation, the weighting function is a multifovea weighted function that resembles the biological foveated vision system. The foveae are salient points determined in the scale-space representation of the image. The desirable properties of multifovea weighted function in the wavelet domains fit nicely into our scheme. We have implemented our technique and tested its robustness and sensitivity for several manipulations.
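The space-variant weighting idea can be sketched with a sum of Gaussians centered at the fovea points. The paper's actual construction lives in the wavelet domain with scale-space saliency, so the Gaussian form and the sigma value here are assumptions for illustration only:

```python
import numpy as np

def multifovea_weight(shape, foveae, sigma=5.0):
    """Space-variant weight map: a sum of Gaussians centered at the fovea
    (salient) points, so distortion near a fovea is weighted more heavily."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    wmap = np.zeros(shape)
    for fy, fx in foveae:
        wmap += np.exp(-((yy - fy) ** 2 + (xx - fx) ** 2) / (2 * sigma ** 2))
    return wmap

def weighted_norm(a, b, wmap):
    """Weighted distortion between two images: localized tampering near a
    fovea costs more than the same change in an unimportant region."""
    return float(np.sqrt((wmap * (a - b) ** 2).sum()))
```

The same one-pixel change scores higher near a fovea than far from it, which is exactly the sensitivity to localized tampering the abstract describes.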

20.
This paper describes a new adaptive technique for the coding of transform coefficients in block-based image compression schemes. The presence and orientation of edge information in a sub-block are used to select different quantization tables and zigzag scan paths to suit the local image pattern. Measures of edge presence and edge orientation in a sub-block are calculated from its DCT coefficients, and each sub-block is classified into one of four edge patterns. Experimental results show that, compared to JPEG and improved HVS-based coding, the new scheme significantly increases the compression ratio without sacrificing reconstructed image quality.
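For reference, the default JPEG-style zigzag path that the edge-adaptive scan paths deviate from can be generated as follows:

```python
def zigzag_indices(n=8):
    """Standard JPEG-style zigzag scan order for an n x n coefficient
    block: traverse anti-diagonals, alternating direction so low-frequency
    coefficients come first. This is the default path; the paper selects
    different paths per edge class."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda ij: (ij[0] + ij[1],
                                  ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

def zigzag_scan(block):
    """Flatten a square coefficient block along the zigzag path."""
    return [block[i][j] for i, j in zigzag_indices(len(block))]
```

Reordering by scan path costs nothing at decode time, which is why per-class scan paths are a cheap way to lengthen zero runs for edge-dominated blocks.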
