Similar Documents
 20 similar documents found (search time: 28 ms)
1.
This paper presents a Field Programmable Gate Array (FPGA) implementation for image/video compression using an improved block truncation coding (BTC) image compression technique. The improvement is achieved by employing a Hopfield neural network (HNN) to calculate a cost function upon which a block is classified as either a high- or a low-detail block. Accordingly, different blocks are coded at different bit rates, resulting in better compression ratios. The paper formulates the use of the HNN within the BTC algorithm in such a way that a viable FPGA implementation is produced. The implementation exploits the inherent parallelism of the BTC/HNN algorithm to provide efficient algorithm-to-architecture mapping. The Xilinx VirtexE BTC implementation has been shown to provide a processing speed of about 1.113 × 10⁶ pixels per second at a rate that varies between 1.25 and 2 bits/pixel, depending on the nature of the image.
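
For reference, a minimal NumPy sketch of the classic BTC baseline that the paper improves on is given below; the HNN cost function and the high/low-detail block classification are not reproduced, and the 4 × 4 block size is an illustrative assumption rather than the paper's parameter.

```python
import numpy as np

def btc_encode_block(block):
    """Classic BTC for one grayscale block: keep a 1-bit-per-pixel bitmap plus
    two reconstruction levels that preserve the block mean and variance."""
    mean = block.mean()
    bitmap = block >= mean
    q = int(bitmap.sum())
    m = block.size
    if q in (0, m):                              # flat block: a single level suffices
        return bitmap, mean, mean
    std = block.std()
    low = mean - std * np.sqrt(q / (m - q))      # level for pixels below the mean
    high = mean + std * np.sqrt((m - q) / q)     # level for pixels at/above the mean
    return bitmap, low, high

def btc_decode_block(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.random.randint(0, 256, (4, 4)).astype(float)
recon = btc_decode_block(*btc_encode_block(block))
```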

2.
This study presents a simple data-hiding method with high hiding capacity. The basic concept uses a modulus operation. The proposed method has the following advantages: (1) it outperforms the simple LSB substitution method given the same range (0 to m−1) of data digits in the embedded data; (2) it achieves good image visual quality without the need for post-processing; (3) it is almost as simple as the LSB method in both coding and decoding; (4) its smaller-error advantage over the simple LSB substitution method can also be verified mathematically; (5) the error is smaller than (m−1)/2 for almost every pixel; (6) it has high hiding capacity (for example, it can hide a 256×256 or 256×512 image in a 512×512 host image).
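
A minimal sketch of a modulus-based embedding of the kind the abstract describes, assuming each secret digit d in 0..m−1 is hidden by moving the pixel to the nearest value congruent to d modulo m; the paper's exact embedding rule and boundary handling may differ, and the function names are illustrative.

```python
def embed_digit(pixel, d, m=4):
    """Hide one secret digit d (0 <= d < m) by replacing the pixel with the
    closest value p' such that p' % m == d, kept inside the 0..255 range."""
    base = pixel - (pixel % m) + d
    candidates = [c for c in (base - m, base, base + m) if 0 <= c <= 255]
    return min(candidates, key=lambda c: abs(c - pixel))

def extract_digit(pixel, m=4):
    return pixel % m

stego = embed_digit(130, 3, m=4)        # 131: distortion of 1 grey level
assert extract_digit(stego, m=4) == 3
```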

3.
We present a new technique for content-based image retrieval using the motif cooccurrence matrix (MCM). The MCM is derived from the motif-transformed image. The whole image is divided into 2×2 pixel grids. Each grid is replaced by a scan motif that minimizes the local gradient while traversing the 2×2 grid, forming a motif-transformed image. The MCM is then defined as a 3D matrix whose (i,j,k) entry denotes the probability of finding a motif i at a distance k from the motif j in the transformed image. Conceptually, the MCM is quite similar to the color cooccurrence matrix (CCM); however, retrieval using the MCM is better than with the CCM since it captures third-order image statistics in the local neighborhood. Experiments confirm that the use of the MCM considerably improves retrieval performance.
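
A short sketch of the motif transform and a single-distance slice of the MCM, assuming a small illustrative set of 2×2 scan orders and a sum-of-absolute-differences traversal cost; the paper's actual motif family may differ.

```python
import numpy as np
from itertools import product

# Candidate traversal orders of a 2x2 grid, as indices into the flattened grid.
# This motif set is an illustrative assumption; the paper defines its own family
# of scans over the 2x2 grid.
SCANS = [(0, 1, 3, 2), (0, 2, 3, 1), (0, 1, 2, 3), (0, 2, 1, 3)]

def motif_transform(img):
    """Replace every 2x2 grid by the index of the scan whose traversal has the
    smallest total absolute intensity change (the 'local gradient')."""
    h, w = img.shape
    motifs = np.zeros((h // 2, w // 2), dtype=np.int32)
    for i, j in product(range(0, h - 1, 2), range(0, w - 1, 2)):
        g = img[i:i + 2, j:j + 2].astype(float).ravel()
        costs = [sum(abs(g[s[k + 1]] - g[s[k]]) for k in range(3)) for s in SCANS]
        motifs[i // 2, j // 2] = int(np.argmin(costs))
    return motifs

def motif_cooccurrence(motifs, k=1):
    """A slice of the MCM for one distance k along rows, normalised to probabilities."""
    n = len(SCANS)
    mcm = np.zeros((n, n))
    for a, b in zip(motifs[:, :-k].ravel(), motifs[:, k:].ravel()):
        mcm[a, b] += 1
    return mcm / max(mcm.sum(), 1.0)
```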

4.
Applied Soft Computing, 2008, 8(1): 634–645
We propose a vector quantization (VQ) with variable block size using local fractal dimensions (LFDs) of an image. VQ with variable block size has so far been implemented using a quad-tree (QT) decomposition algorithm, which partitions the image based on the homogeneity of its local regions. However, we consider the complexity of local regions to be more essential than their homogeneity, because viewers pay closer attention to complex regions than to homogeneous ones; complex regions are therefore essential for image compression. Since the complexity of image regions is quantified by LFD values, we implemented variable block size using LFD values and constructed a codebook (CB) for VQ. To construct the CB, we used only discriminant analysis and the FGLA, an algorithm that combines the generalized Lloyd algorithm (GLA) with the fuzzy k-means algorithm. Results of computational experiments showed that this method correctly encodes the regions to which viewers pay close attention, a promising result for obtaining a well-perceived compressed image. The performance of the proposed method is also superior to that of VQ by the FGLA in terms of both compression rate and decoded image quality. Furthermore, this method achieved 1.0 bpp and a PSNR of more than 30 dB with a CB of only 252 code-vectors.
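
A rough sketch of how the local fractal dimension of a block could be estimated by differential box counting, one common way to quantify the complexity the abstract refers to; the scales and the estimator itself are illustrative assumptions, not necessarily the paper's.

```python
import numpy as np

def local_fractal_dimension(block, scales=(2, 4, 8)):
    """Crude differential box-counting estimate of the local fractal dimension
    of a square grayscale block; higher values indicate a more complex (less
    homogeneous) region. The scale set is an illustrative choice."""
    m = block.shape[0]
    g = 256.0                                   # number of grey levels
    xs, ys = [], []
    for r in scales:
        h = r * g / m                           # box height at this scale
        n_r = 0
        for i in range(0, m, r):
            for j in range(0, m, r):
                cell = block[i:i + r, j:j + r]
                n_r += int(np.ceil((cell.max() + 1) / h) - np.floor(cell.min() / h))
        xs.append(np.log(m / r))                # proportional to log(1/r)
        ys.append(np.log(n_r))
    slope, _ = np.polyfit(xs, ys, 1)            # LFD = slope of the log-log fit
    return float(slope)

lfd = local_fractal_dimension(np.random.randint(0, 256, (16, 16)).astype(float))
```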

5.
This paper describes a divide-and-conquer Hough transform technique for detecting a given number of straight edges or lines in an image. This technique is designed for implementation on a pyramid of processors, and requires only O(log n) computational steps for an image of size n × n.

6.
To improve the security of the smart cards now in widespread use, this paper proposes a scheme that embeds the cardholder's portrait in the smart card and, to address the problem of compressing the color portrait under the card's limited storage capacity, presents a hierarchical compression algorithm based on facial features. The algorithm unifies lossless and lossy compression within a wavelet-transform framework: the face region is first detected in the image; that region is then compressed losslessly or with low-ratio lossy compression, while the remaining regions are compressed with high-ratio lossy compression. Experiments show that the algorithm achieves good results in terms of both compression ratio and reconstructed image quality.

7.
8.
Objective: To address the problem that existing saliency-based image compression methods cannot effectively perceive image content containing multiple objects, which degrades the quality of the reconstructed image, an image compression method based on salient-region detection with multi-scale deep features is proposed. Method: An improved convolutional neural network (CNN) detects multi-scale deep image features, yielding salient regions at different scales; the saliency maps are then adaptively resized to the input image size and filtered with a Gaussian function to obtain a fused multi-scale salient region; finally, combined with coding techniques, the salient region is compressed near-losslessly while the non-salient regions are compressed with a lossy coder, completing image compression and reconstruction. Results: Compared with JPEG at a coding rate of about 0.39 bit/pixel, the proposed method improves PSNR by 2.23 dB and SSIM by 0.024 on the Kodak PhotoCD dataset, and improves PSNR and SSIM by 1.63 dB and 0.039, respectively, on the Pascal VOC dataset. When the proposed multi-scale salient-region method is combined with SPIHT and run-length encoding (RLE) compression, PSNR on the Kodak dataset improves by 1.85 dB and 1.98 dB and SSIM by 0.006 and 0.023, respectively. Conclusion: The proposed multi-scale deep-feature compression method outperforms traditional coding techniques; by effectively perceiving image content, it reduces content loss during compression and thus improves the quality of the reconstructed image.
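
A small sketch of the resize-filter-fuse step described above, assuming the per-scale saliency maps come from the CNN as 2-D arrays; the sigma, threshold and function name are illustrative assumptions, and SciPy's zoom and gaussian_filter are used for resizing and smoothing.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def fuse_saliency_maps(maps, out_shape, sigma=2.0, thresh=0.5):
    """Resize per-scale saliency maps to the input image size, smooth each with
    a Gaussian, average them, and threshold into a binary salient-region mask."""
    fused = np.zeros(out_shape, dtype=float)
    for m in maps:
        zy, zx = out_shape[0] / m.shape[0], out_shape[1] / m.shape[1]
        resized = zoom(m.astype(float), (zy, zx), order=1)[:out_shape[0], :out_shape[1]]
        fused += gaussian_filter(resized, sigma)
    fused /= len(maps)
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-9)
    return fused > thresh            # True: compress near-losslessly; False: lossy
```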

9.
刘伟  杨圣 《测控技术》2006,25(5):30-32
A region-of-interest compression method for medical images combining JPEG and JPEG2000 is proposed. The method applies lossless JPEG2000 compression to a manually selected region of interest in the medical image and high-ratio JPEG compression to the remaining regions, thereby resolving the conflict between high compression ratio and high image quality in medical imaging. In a compression experiment on a human-brain MRI image, a compression ratio of 12:1 was obtained while the image information in the lesion region remained intact.

10.
In this paper, a new approach to edge detection and image compression of bilevel error-diffused images is proposed. The approach is applied directly to the error-diffused images without any inverse halftoning technique. The main idea behind the proposed edge detection method is to compute the consistency value of each pixel of the error-diffused image; the computed values are then clustered into two classes to obtain the desired edges. Here, the consistency value is a function of the mass center and geometric center of the window; more precisely, it represents the possibility that the area covered by the window is uniform. As for compression, each error-diffused input image is compressed by dividing it into non-overlapping blocks of size 8 × 8; each highly consistent block is then encoded by its average illumination, and each lowly consistent block is transmitted directly using the original bitmap. The threshold distinguishing these two kinds of blocks is calculated automatically from the compression rate required by the user. The complexity of the compression technique is low, and this fast method can be used in real-time applications.
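
A hedged sketch of one plausible reading of the consistency value (the distance between the mass centre of the black dots and the geometric centre of the window), together with the 8 × 8 block coding decision; the exact definition used in the paper may differ.

```python
import numpy as np

def consistency(win):
    """One plausible reading of the consistency value: how close the mass centre
    of the black (0) pixels lies to the geometric centre of the window; 1.0 means
    balanced (likely a uniform halftone area), values near 0.0 mean one-sided."""
    h, w = win.shape
    ys, xs = np.nonzero(win == 0)
    if len(ys) == 0:
        return 1.0
    mass = np.array([ys.mean(), xs.mean()])
    geo = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    return 1.0 - np.linalg.norm(mass - geo) / np.linalg.norm(geo)

def encode_blocks(img, thresh):
    """8 x 8 blocks: consistent blocks are sent as their average illumination,
    the rest are sent as the original bitmap."""
    out = []
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            blk = img[i:i + 8, j:j + 8]
            if consistency(blk) >= thresh:
                out.append(("mean", float(blk.mean())))
            else:
                out.append(("bitmap", blk.copy()))
    return out
```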

11.
The design of a fast, flexible and dynamically microprogrammable pipelined image processor is presented. The machine is especially suited, though not completely devoted, to performing local operations (up to 16 × 16) of both a logical and an arithmetic character on pictures stored in a random-access image memory with a 256-level grey scale. Separate parts of the machine take care of data manipulation and address generation. The machine's functioning is illustrated by discussing the way in which arithmetic N × N neighbourhood operations and binary 3 × 3 neighbourhood operations were implemented. Finally, the software supporting microprogram development and debugging, as well as the run-time support software, is described.

12.
In this paper a locally adaptive wavelet image coder is presented. It is based on an embedded human visual system model that exploits the space- and frequency-localization properties of wavelet decompositions for tuning the quantization step of each discrete wavelet transform (DWT) coefficient according to the local properties of the image. A coarser quantization is performed in areas of the image where the visibility of errors is reduced, thus decreasing the total bit rate without affecting the resulting visual quality. The size of the quantization step for each DWT coefficient is computed by taking into account the multiresolution structure of wavelet decompositions, so that no side information needs to be sent to the decoder and no prediction mechanisms are required. Perceptually lossless as well as perceptually lossy compression is supported: the desired visual quality of the compressed image is set by means of a quality factor. Moreover, the technique for tuning the target visual quality allows the user to define arbitrarily shaped regions of interest and to set a different quality factor for each one.
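
A simplified sketch of per-coefficient adaptive quantization driven by local activity, as a stand-in for the embedded HVS model; the masking function, window size and gain are assumptions, and unlike the published coder this version derives the step from the original coefficients, as noted in the comments.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_quantize(subband, base_step, masking_gain=0.5):
    """Quantize one wavelet subband with a step that grows where local activity
    (a crude texture-masking proxy) is high, so busy areas are quantized more
    coarsely. The published coder derives the step without sending side
    information; this simplified sketch computes it from the original
    coefficients instead."""
    activity = uniform_filter(np.abs(subband), size=3)     # local mean magnitude
    step = base_step * (1.0 + masking_gain * activity / (activity.mean() + 1e-9))
    quantized = np.round(subband / step)
    return quantized, step                                 # dequantize as quantized * step
```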

13.
The fabrication process of a silicon condenser microphone and experimental results of the acoustic measurements are described. The microphone consists of two chips. One chip carries the 150 nm thick silicon nitride membrane, which has an area of 0.8 mm × 0.8 mm. The second chip contains the back electrode, the spacer and the contact pads of the microphone. In order to reduce the streaming resistances in the air gap, the back-electrode area is either structured with grooves by a plasma etching technique or with holes by an anisotropic etching technique. A frequency-independent sensitivity of 10 mV/Pa (open circuit, 1.8 mV/Pa measured) up to 30 kHz is obtained as a result of this structuring of the back-electrode area. Since the air-gap height is only 2 μm, the capacitance of the transducers ranges from 1 to 1.3 pF. The total size of the silicon microphone is 1.6 mm × 2 mm × 0.56 mm.

14.
Near-lossless image compression based on regions of interest   (Cited by: 1; self-citations: 0; citations by others: 1)
Lossless region-of-interest (ROI) image compression applies lossless compression to the region of interest and lossy compression elsewhere, which guarantees that important information is not lost while raising the compression ratio as far as possible. Based on the integer wavelet transform (IWT) and embedded zerotree coding, near-lossless image compression with a lossless ROI is implemented, and a shape-coding algorithm in the wavelet-transform domain, tree-mapping shape coding, is proposed; the principle and implementation of the algorithm are given and experiments are carried out. The experimental results show that the algorithm improves compression efficiency, and that the efficiency depends on the size of the ROI and on the quality required for the non-ROI part of the image.

15.
Task-oriented medical image compression   (Cited by: 4; self-citations: 0; citations by others: 0)
Modern medical imaging produces large volumes of digital medical images, but storing and transmitting these images is a serious problem. Traditionally, lossless coding has been used to improve storage and transmission efficiency, but to reach higher compression ratios lossy compression must be used; lossy compression, however, introduces distortion into the image and must be applied with care. A medical image usually consists of two kinds of regions. One contains important diagnostic information; since the cost of describing it incorrectly is very high, a high-quality compression method is essential for it. The information in the other kind of region is less important, and the goal there is to achieve as high a compression ratio as possible. To guarantee the reconstruction quality of the region of interest while obtaining a high compression ratio, a task-oriented medical image compression algorithm is proposed. The method unifies lossless and lossy compression within a wavelet-transform framework, applying lossless compression to the region of interest and lossy compression to the rest of the image. Experiments show that the method achieves good performance in terms of both compression ratio and reconstructed image quality.

16.
Image segmentation is crucial for multimedia applications. Multimedia databases use segmentation for the storage and indexing of images and video; segmentation is also used for object tracking in the MPEG-7 standard and in video conferencing for compression and coding purposes. These are only some of the multimedia applications of image segmentation. It is usually the first task of any image analysis process, and subsequent tasks therefore rely heavily on the quality of the segmentation. The proposed method of color image segmentation is very effective in segmenting a multimedia-type image into regions. Pixels are first classified as either chromatic or achromatic depending on their HSI color values. Next, a seed determination algorithm finds seed pixels that lie in the center of regions. These seed pixels are used in the region-growing step to grow regions by comparing the seed pixels to neighboring pixels using the cylindrical distance metric. Finally, regions that are similar in color are merged.
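
The cylindrical distance mentioned above has a standard form in HSI space; the sketch below shows it together with a hypothetical growing threshold.

```python
import math

def cylindrical_distance(p, q):
    """Distance between two HSI pixels p = (h, s, i) and q, with intensity along
    the cylinder axis, saturation as radius and hue (in radians) as angle."""
    d_intensity = p[2] - q[2]
    d_chroma_sq = p[1] ** 2 + q[1] ** 2 - 2.0 * p[1] * q[1] * math.cos(p[0] - q[0])
    return math.sqrt(d_intensity ** 2 + d_chroma_sq)

seed = (0.50, 0.40, 0.60)                    # (hue, saturation, intensity) of a seed pixel
neighbour = (0.60, 0.35, 0.58)
grow = cylindrical_distance(seed, neighbour) < 0.2   # illustrative growing threshold
```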

17.
The trade-off between image fidelity and coding rate is addressed by several techniques, but all of them require the ability to measure distortion. The problem is that finding a sufficiently general measure of perceptual quality has proven to be an elusive goal. Here, we propose a novel technique for deriving an optimal compression ratio for lossy coding based on the relationship between information theory and the problem of testing hypotheses. The best achievable compression ratio determines a boundary between achievable and non-achievable regions in the trade-off between source fidelity and coding rate. The resulting performance bound is operational in that it is directly achievable by a constructive procedure, as suggested by a theorem that states the relationship between the best achievable compression ratio and the Kullback-Leibler information gain. As an example of the proposed technique, we analyze the effects of lossy compression at the best achievable compression ratio on the identification of breast cancer microcalcifications.
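
A small sketch of how the Kullback-Leibler information gain between an original image and its lossy reconstruction could be estimated from grey-level histograms; the paper's actual estimator and hypothesis-testing procedure are not reproduced.

```python
import numpy as np

def kl_information_gain(original, reconstructed, bins=256, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between the grey-level histograms
    of the original image and its lossy reconstruction, in bits."""
    p, _ = np.histogram(original, bins=bins, range=(0, 256))
    q, _ = np.histogram(reconstructed, bins=bins, range=(0, 256))
    p = p.astype(float) + eps
    q = q.astype(float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log2(p / q)))
```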

18.

The demand for better social media applications has increased tremendously, creating the need for digital systems with larger storage capacity and more processing power. However, an increase in multimedia content size reduces overall processing performance, because storing and retrieving large files affects the execution time. It is therefore extremely important to reduce the size of multimedia content, which can be achieved by image and video compression. There are two types of image or video compression: lossy and lossless. In lossless compression, the decompressed image is an exact copy of the original, while in lossy compression the original and the decompressed image differ from each other. Lossless compression is needed when every pixel matters, as in automatic image-processing applications. Lossy compression, on the other hand, is used in applications based on human visual perception, where not every single pixel is important but the overall image quality is. Many video compression algorithms have been proposed, yet the balance between compression rate and video quality still needs further investigation. The algorithm developed in this research focuses on this balance. The proposed algorithm applies a different compression stage to each type of information: eliminating redundant and semi-redundant frames, eliminating data by manipulating consecutive XORed frames, and reducing the discrete cosine transform coefficients according to the desired accuracy and compression ratio. A neural network is used to further reduce the frame size. The proposed method is a lossy compression type, but it can approach the near-lossless type in terms of image quality and compression ratio with comparable execution time.
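
A minimal sketch of the frame-elimination stage described above, assuming integer-valued frames so that consecutive frames can be XORed; the change-fraction threshold and the function name are illustrative.

```python
import numpy as np

def drop_redundant_frames(frames, max_changed_fraction=0.02):
    """Keep a frame only if XORing it with the previously kept frame shows that
    more than a small fraction of pixels changed; otherwise treat it as
    (semi-)redundant. The threshold is an illustrative assumption."""
    kept = [frames[0]]
    for frame in frames[1:]:
        diff = np.bitwise_xor(kept[-1], frame)
        if np.count_nonzero(diff) / diff.size > max_changed_fraction:
            kept.append(frame)
    return kept
```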


19.
杜时英 《计算机时代》2012,(8):24-25,28
A new binary (bit-level) lossless image compression method is proposed in which error-correcting BCH codes are introduced into the image compression algorithm. The binary data of the image are divided into codewords of size 7; these blocks are fed into a BCH decoder which, after removing the parity bits, reduces the original block size to 4 bits. Experimental results show that the compression algorithm is effective, gives a good compression ratio, and loses no data. Using BCH codes improves the compression ratio over Huffman compression alone.
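
A sketch of the (7,4) codeword-to-message mapping the abstract relies on, using the single-error-correcting BCH (Hamming) code in systematic form; the bookkeeping the method needs for 7-bit blocks that are not valid codewords is not shown.

```python
import numpy as np

# Systematic generator and parity-check matrices of the (7,4) single-error-
# correcting BCH (Hamming) code: a valid 7-bit codeword carries 4 message bits.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode4(msg):
    """4 message bits -> 7-bit codeword (the decompression direction)."""
    return msg @ G % 2

def decode7(block):
    """7-bit block -> (4 message bits, is_valid_codeword). Only valid codewords
    shrink to 4 bits; the paper's handling of non-codeword blocks is not
    reproduced here."""
    syndrome = H @ block % 2
    return block[:4].copy(), not syndrome.any()

msg = np.array([1, 0, 1, 1])
recovered, ok = decode7(encode4(msg))
assert ok and np.array_equal(recovered, msg)
```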

20.
This paper presents a new lossy image compression technique which uses singular value decomposition (SVD) and wavelet difference reduction (WDR). These two techniques are combined so that the SVD compression boosts the performance of the WDR compression: SVD compression offers very high image quality but low compression ratios, whereas WDR compression offers high compression. In the proposed technique, an input image is first compressed using SVD and then compressed again using WDR; the WDR stage is further used to obtain the required compression ratio of the overall system. The proposed image compression technique was tested on several test images and the results compared with those of WDR and JPEG2000. The quantitative and visual results show the superiority of the proposed compression technique over the aforementioned techniques. The PSNR at a compression ratio of 80:1 for Goldhill is 33.37 dB for the proposed technique, which is 5.68 dB and 5.65 dB higher than for the JPEG2000 and WDR techniques, respectively.
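
A minimal NumPy sketch of the SVD stage of the proposed pipeline; the subsequent WDR coding is not shown, and the rank k is an illustrative choice.

```python
import numpy as np

def svd_compress(img, k):
    """Keep only the k largest singular values of a grayscale image --
    the SVD stage that precedes WDR coding in the proposed pipeline."""
    u, s, vt = np.linalg.svd(img.astype(float), full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

img = np.random.randint(0, 256, (64, 64))
print(round(psnr(img, svd_compress(img, k=16)), 2))
```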
