Similar Literature
20 similar records found.
1.
In this paper a Human Visual System based adaptive quantization scheme is proposed. The proposed algorithm supports perceptually lossless as well as lossy compression. It uses a transform-based compression approach built on the wavelet transform and incorporates vision models for the compression of both the luminance and chrominance components. The major strengths of the coder are the vision model for the chrominance components and the optimal way the scales are distributed between the luminance and chrominance components to achieve higher compression ratios. The perceptual model developed for the color components allows them to be compressed more heavily without causing any color degradation. For each image the visual thresholds are evaluated and bits are allocated optimally, so that the quantization error always stays below the visual distortion for the given rate. To validate the strength of the proposed algorithm, the perceptual quality of images reconstructed with the proposed coder is compared with that of images reconstructed with the JPEG2000 standard coder at the same compression ratio. The perceptual quality of the compressed images is evaluated with recent perceptual quality metrics such as the Structural Similarity Index, Visual Information Fidelity, and Visual Signal-to-Noise Ratio. The results reveal that the proposed structure gives an excellent improvement in perceptual quality over existing schemes, for both lossy and perceptually lossless compression. These advantages make the proposed algorithm a good candidate for replacing the quantizer stage of current image compression standards.
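A minimal sketch of the kind of visually weighted wavelet quantization the abstract describes, assuming PyWavelets is available; the scale-dependent step sizes below are illustrative placeholders, not the visual thresholds derived in the paper.

```python
import numpy as np
import pywt

def perceptual_quantize(img, levels=3, base_step=4.0):
    """Quantize wavelet detail subbands with larger steps at finer scales,
    mimicking the eye's lower sensitivity to high spatial frequencies.
    The step schedule is a hypothetical stand-in for per-image thresholds."""
    coeffs = pywt.wavedec2(img.astype(float), 'bior4.4', level=levels)
    out = [coeffs[0]]                       # keep the approximation band intact
    for lvl, details in enumerate(coeffs[1:], start=1):
        step = base_step * 2 ** lvl         # lvl grows toward finer scales
        out.append(tuple(np.round(c / step) * step for c in details))
    return pywt.waverec2(out, 'bior4.4')    # dequantized reconstruction
```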

2.
An image compression technique is proposed that aims at both robustness to the transmission bit errors common in wireless image communication and sufficient visual quality of the reconstructed images. Error robustness is achieved with biorthogonal wavelet subband image coding and multistage gain-shape vector quantization (MS-GS VQ), which uses three stages of signal decomposition to reduce the effect of transmission bit errors by distributing image information among many blocks. Good visual quality of the reconstructed images is obtained by applying genetic algorithms (GAs) to codebook generation, producing reconstruction capabilities superior to conventional techniques; the proposed decomposition scheme also favours the use of GAs because decomposition reduces the problem size. Simulations evaluating the proposed coding scheme under both transmission bit errors and reconstruction distortion show that MS-GS VQ with good codebooks designed by GAs provides not only better robustness to transmission bit errors but also a higher peak signal-to-noise ratio, even at high bit error rates.
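One stage of gain-shape VQ can be sketched as below; the random unit-norm codebook is only a stand-in for the GA-trained codebooks of the paper, and the multistage residual structure is omitted.

```python
import numpy as np

def gs_encode(block, codebook):
    """Encode a block as (gain, shape index); codebook rows are unit-norm,
    so the maximum inner product picks the nearest shape."""
    vec = block.ravel().astype(float)
    gain = np.linalg.norm(vec)
    shape = vec / gain if gain > 0 else vec
    return gain, int(np.argmax(codebook @ shape))

def gs_decode(gain, idx, codebook, block_shape):
    return (gain * codebook[idx]).reshape(block_shape)

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 16))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit-norm shapes
gain, idx = gs_encode(rng.standard_normal((4, 4)), codebook)
approx = gs_decode(gain, idx, codebook, (4, 4))
```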

3.
Objective: Existing reversible data hiding algorithms in the encrypted domain fail to fully exploit the correlation between bit planes when compressing them. To lower the bit-plane compression rate and thereby raise the embedding capacity, a reversible data hiding algorithm for encrypted images is proposed that reduces the redundancy between adjacent bit planes. Method: The image is divided into blocks and the block positions are scrambled; scrambling does not disturb the pixel correlation within each block, so the bit-plane blocks remain easy to compress. After block scrambling, the most significant bit plane is XORed with the second-most-significant plane to produce a new second plane, which is in turn XORed with the plane one bit below it. Applying the same operation to the remaining lower planes yields seven new lower bit planes; combined with the original most significant plane, they form the eight bit planes of a new image. The BBE (binary-block embeding) algorithm is then used to compress the bit planes of the new image to vacate room for the payload, and the vacated image is XOR-encrypted to guarantee security. Results: XORing adjacent bit planes makes all planes below the most significant one smoother, reducing the number of blocks that cannot be compressed by BBE or compress poorly, and thus favouring BBE compression. Compared with existing algorithms based on bit-plane compression, the proposed algorithm achieves a higher embedding rate, raising the embedding capacity by 0.4 bit/pixel on average across images of different textures. Conclusion: Experiments show that the algorithm vacates more room for additional data while preserving security and allows data to be embedded flexibly according to practical needs; the embedded data can be extracted losslessly and the image recovered exactly. Overall, the proposed algorithm performs well.
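A hedged sketch of the adjacent-bit-plane XOR chain described above (the BBE compression and encryption steps are omitted): the forward transform keeps the MSB plane and XORs each lower plane with the already-transformed plane just above it, and the inverse simply replays the chain.

```python
import numpy as np

def xor_planes_forward(img):
    planes = [(img >> b) & 1 for b in range(7, -1, -1)]   # MSB plane first
    new = [planes[0]]                                     # MSB kept as-is
    for p in planes[1:]:
        new.append(new[-1] ^ p)                           # chain the XORs downward
    return sum(p << (7 - i) for i, p in enumerate(new)).astype(np.uint8)

def xor_planes_inverse(timg):
    nplanes = [(timg >> b) & 1 for b in range(7, -1, -1)]
    orig = [nplanes[0]]
    for prev, cur in zip(nplanes, nplanes[1:]):
        orig.append(prev ^ cur)                           # n_{k-1} ^ n_k = b_k
    return sum(p << (7 - i) for i, p in enumerate(orig)).astype(np.uint8)

img = np.random.default_rng(0).integers(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(xor_planes_inverse(xor_planes_forward(img)), img)
```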

4.
Fingerprint image compression based on adaptive sparse transform
With the wide deployment of fingerprint recognition, large numbers of fingerprint images must be collected and stored. For the large databases in fingerprint recognition systems, images must be compressed before storage to save space. This paper proposes a fingerprint image compression algorithm based on an adaptive sparse transform. Offline, the algorithm extracts fingerprint image features to train an overcomplete dictionary. During encoding, differential predictive coding and the sparse transform first map the image into the sparse domain; the DC coefficients and the sparse representation coefficients are then quantized and entropy-coded, compressing the image information. Experiments show that at low and medium bit rates the algorithm achieves better rate-distortion performance than mainstream codecs such as JPEG, JPEG2000 and WSQ; at the same bit rate, its compressed images have better subjective visual quality and yield higher fingerprint recognition rates.
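The sparse-transform step can be illustrated with a bare-bones orthogonal matching pursuit; the random dictionary below is only a stand-in for the overcomplete dictionary the paper trains offline on fingerprint features, and the quantization and entropy-coding stages are omitted.

```python
import numpy as np

def omp(D, x, k):
    """Greedy k-sparse coding of x over a column-normalized dictionary D."""
    residual, support = x.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))  # best atom
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    return support, coef   # atom indices and coefficients to quantize and code

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
x = rng.standard_normal(64)          # stands in for a predicted-residual block
idx, c = omp(D, x, k=6)
```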

5.
In this paper, we propose a two-dimensional histogram equalization (2DHE) algorithm which utilizes contextual information around each pixel to enhance the contrast of an input image. The algorithm is based on the observation that the contrast of an image can be improved by increasing the grey-level differences between each pixel and its neighbouring pixels. Equalization is achieved by assuming that, for a given image, the moduli of the grey-level differences between pixels and their neighbouring pixels are equally distributed. The well-known global histogram equalization algorithm is the special case of 2DHE in which contextual information is not utilized. 2DHE is easy to implement, requiring only a small number of simple arithmetic operations, and is thus suitable for real-time contrast enhancement applications. Experimental results show that 2DHE produces enhanced images that are better than, or comparable to, those of several state-of-the-art algorithms. The only parameter of 2DHE which requires tuning is the size of the spatial neighbourhood support that provides the contextual information for a given dynamic range of the enhanced image; an automated parameter selection algorithm is also presented. The algorithm can be applied to a wide range of image types.
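For reference, the global histogram equalization that 2DHE reduces to when no contextual information is used looks like this in NumPy (8-bit greyscale assumed):

```python
import numpy as np

def global_hist_eq(img):
    """Classical histogram equalization; assumes img is uint8 and spans
    more than one grey level."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf * 255).astype(np.uint8)[img]           # remap grey levels
```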

6.
This paper presents an automatic multiple-scale algorithm for delineating individual tree crowns in high-spatial-resolution infrared colour aerial images. Tree crown contours were identified as zero-crossings with convex grey-level curvature, computed on the intensity image at each image scale. A modified centre of curvature was estimated for every edge-segment pixel; for each segment, these centre points formed a swarm that was modelled as a primal sketch using an ellipse extended with the mean circle of curvature. The model described the region of the derived tree crown based on the edge segment at the current scale. The sketch was rescaled with a significance value and accumulated over a scale interval. In the accumulated sketch, a tree crown segment was grown, starting at local peaks, under the condition that it stayed inside the area of healthy vegetation in the aerial image and did not trespass into a neighbouring crown segment. The method was evaluated by comparison with manual delineation and with ground truth on 43 randomly selected sample plots. Its performance proved almost equivalent to visual interpretation: on average, seven out of ten tree crowns were the same. Furthermore, ground truth indicated a large number of hidden trees. The proposed technique could be used as a basic tool in forest surveys. Received: 24 June 1997 / Accepted: 28 April 1998

7.
Conventional methods for compressing high-dynamic-range image data are easily affected by scene changes, and the quantized 8-bit display image tends to be globally blurred, losing fine detail and weak small targets. To address this, a detail-enhancement algorithm based on histogram reconstruction is proposed. The histogram counts of the image are reassigned so that the detail present in the image is preserved while the gaps between adjacent grey levels are narrowed; a two-dimensional Gabor filter is used to model the visual perception system, and convolving the filter with the image yields a smoothed image; a local contrast enhancement step then strengthens the image detail, and the enhanced intermediate result is linearly mapped to an 8-bit display image. Experiments show that, compared with other high-dynamic-range data compression methods, the quantized images are sharper and lose neither detail nor targets.
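A rough sketch of the Gabor smoothing stage, assuming SciPy; the kernel size, orientation, and wavelength below are illustrative values, not the parameters tuned in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real 2-D Gabor kernel: a cosine carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g / np.abs(g).sum()              # keep the response scale bounded

def gabor_smooth(img):
    return convolve(img.astype(float), gabor_kernel())
```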

8.
This paper describes a progressive data compression algorithm for subsea gray-level image transmission through a low-rate ultrasonic link in a telerobotics system. The proposed image compression algorithm is based on JPEG and has been modified for a specific subsea application in which the communication bit rate is confined to 100–200 bits/s and frequent updating of reconstructed images is required. Experimental results based on 23 real images show that the proposed algorithm performs better than JPEG when images are reconstructed from a small amount of data. The effect of transmission errors and the computational complexity of the proposed algorithm have also been analysed.

9.
To relieve the storage pressure that wireless capsule endoscopy images place on electronic devices and servers, a lossless compression algorithm that adapts to irregular textures is proposed. Within each image block, an extended angular prediction mode locates the five reference pixels nearest to the pixel being predicted and assigns different weights to three of them; following the gradient pattern of neighbouring pixel values, the range of candidate predictions along irregular texture directions is widened, the optimal prediction is selected by the block's minimum information entropy, and the prediction residual is obtained as the difference between the true and predicted values, adapting the scheme to irregularly textured images. A cross-component prediction mode then selects optimal prediction coefficients and builds a linear relation matching the distribution of residuals inside the block, removing the redundancy among the three components of the current pixel. The residuals remaining after multi-angle and cross-component prediction are entropy-coded with the Deflate algorithm. Experiments on the Kvasir-Capsule dataset give an average lossless compression ratio of 5.81; compared with WebP, SAP, MDIP and other algorithms, the scheme offers better compression performance and removes redundancy more effectively, improving the redundancy-removal rate by about 1.9% over WebP.

10.
An encoding/decoding technique and a computer architecture for the progressive refinement of 3-D images are suggested. The method is based on a binary-tree representation of grey-level images. A scheme to transform an N × N × N image array into a sequence of at most N × N × N elements is given; as more elements of the sequence are scanned, finer-resolution representations of the image are obtained. The proposed architecture, which is suitable for VLSI implementation, performs the transformations between the image and its encoding using O(log N) processors, in time and space proportional to the image size.

11.
李添正, 王春桃. 《计算机应用》 2020, 40(5): 1354-1363
Although many compression methods already exist for binary images, they cannot be applied directly to the compression of encrypted binary images. In scenarios such as cloud computing and distributed processing, efficient lossy compression of encrypted binary images remains a challenge, and little research has addressed it. To tackle this problem, a lossy compression algorithm for encrypted binary images based on Markov random fields (MRF) is proposed. The MRF characterizes the spatial statistics of the binary image and, together with the pixels restored during decompression, is used to infer the pixels discarded while compressing the encrypted image. The sender encrypts the binary image with a stream cipher; the cloud compresses the encrypted image by downsampling that is uniform across blocks but random within each block, followed by low-density parity-check (LDPC) coding; the receiver reconstructs the binary image with loss through a joint factor graph combining decoding, decryption, and MRF reconstruction. Experimental results show good compression efficiency: at compression rates of 0.2 to 0.4 bpp, the bit error rate (BER) of the lossily reconstructed images does not exceed 5%, and the compression efficiency is comparable to that of JBIG2, the international standard for compressing unencrypted binary images. These results demonstrate the feasibility and effectiveness of the proposed algorithm.

12.
Determining Image Origin and Integrity Using Sensor Noise
In this paper, we provide a unified framework for identifying the source digital camera of an image and for revealing digitally altered images using photo-response nonuniformity noise (PRNU), a unique stochastic fingerprint of imaging sensors. The PRNU is obtained with a maximum-likelihood estimator derived from a simplified model of the sensor output. Both digital forensic tasks are then achieved by detecting the presence of the sensor's PRNU in specific regions of the image under investigation; the detection is formulated as a hypothesis-testing problem. The statistical distribution of the optimal test statistic is obtained using a predictor of the test statistic on small image blocks, which enables more accurate and meaningful estimation of the probabilities of falsely rejecting a correct camera and of missing a tampered region. We also include a benchmark implementation of this framework and detailed experimental validation. The robustness of the proposed forensic methods is tested against common image processing operations such as JPEG compression, gamma correction, resizing, and denoising.
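The maximum-likelihood PRNU estimate has the well-known closed form K̂ = Σᵢ WᵢIᵢ / Σᵢ Iᵢ². A condensed sketch follows, with a simple Gaussian-blur denoiser standing in for the wavelet denoising filter typically used to extract the noise residuals.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_prnu(images):
    """K_hat = sum_i(W_i * I_i) / sum_i(I_i ** 2) over same-size images."""
    num = np.zeros(images[0].shape)
    den = np.zeros(images[0].shape)
    for img in images:
        I = img.astype(float)
        W = I - gaussian_filter(I, sigma=1.0)  # crude noise residual of one image
        num += W * I
        den += I * I
    return num / (den + 1e-12)                 # epsilon guards all-zero pixels
```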

13.
The basic goal of medical image compression is to reduce the bit rate and raise compression efficiency for the transmission and storage of medical imagery while maintaining acceptable diagnostic image quality. Because of storage, transmission-bandwidth, and picture archiving and communication constraints, and the limitations of conventional compression methods, medical imagery needs to be compressed selectively so as to reduce transmission time and storage cost while preserving high diagnostic quality. Another important motivation for context-based medical image compression is the requirement for high spatial resolution and contrast sensitivity. In medical images, the contextual region is the area containing the most useful and important information, and it must be coded carefully without appreciable distortion. A novel scheme for context-based coding is proposed here that yields significantly better compression rates than the general JPEG and JPEG2K methods. In the proposed method, the contextual part of the image is encoded selectively, with high priority and very low compression (high bpp), while the background is encoded separately, with low priority and high compression (low bpp); the two are then recombined to reconstruct the image. As a result, high overall compression, better diagnostic image quality, and improved performance parameters (CR, MSE, PSNR and CoC) are obtained. The experimental results have been compared with the scaling, Maxshift, implicit and EBCOT methods on ultrasound medical images, and the proposed algorithm is found to give better results.
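The dual-priority idea can be caricatured in a few lines: quantize the contextual region finely and the background coarsely, then recombine. A real codec would entropy-code the two regions as separate streams; the mask and step sizes here are hypothetical.

```python
import numpy as np

def roi_quantize(img, roi_mask, roi_step=2, bg_step=32):
    """Fine quantization inside the ROI (high bpp), coarse outside (low bpp)."""
    fine = np.round(img / roi_step) * roi_step
    coarse = np.round(img / bg_step) * bg_step
    return np.clip(np.where(roi_mask, fine, coarse), 0, 255).astype(np.uint8)
```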

14.
15.
For satellite remote-sensing images, MED prediction works well when the autocorrelation coefficient is large, whereas mean prediction has the advantage otherwise. Fusing the strengths of the two, an improved MED predictor is proposed that achieves good prediction on many classes of images, remote-sensing images in particular. In addition, the two prediction-residual mappers are studied in terms of prediction correlation, computational complexity, built-in error resilience, and the compression ratio obtained with the Rice algorithm; a difference mapping with automatic error correction and an adjustable prediction mode are selected. When source interference is strong, the previous-pixel predictor is used; otherwise MED or the improved MED is chosen, balancing compression ratio against error resilience in a design that is realizable in hardware.
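For reference, the MED predictor named above is the median edge detector of JPEG-LS/LOCO-I; a per-pixel sketch, with the mean predictor the paper blends in shown alongside:

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector; a = left, b = above, c = upper-left."""
    if c >= max(a, b):
        return min(a, b)    # edge above or to the left: take the smaller
    if c <= min(a, b):
        return max(a, b)    # opposite edge polarity: take the larger
    return a + b - c        # smooth region: planar prediction

def mean_predict(a, b):
    return (a + b) // 2     # the mean predictor favoured at low correlation
```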

16.
At low bit rates, conventional wavelet compression algorithms inevitably produce ringing artefacts near strong image edges. A low-bit-rate image compression algorithm based on an edge-adaptive wavelet transform is therefore proposed. At the encoder, the strong edges of the image are first detected and coded as side information; the strong-edge information is then used to split the image along the row and column directions into independent data segments, each of which is wavelet-transformed separately; finally, the resulting wavelet coefficients are coded with the EBCOT algorithm. In particular, starting from the imaging model, a new method is proposed to suppress the boundary effect of the segmented data. Experiments show that the edge-adaptive wavelet transform keeps image contours sharp even at extremely low bit rates and effectively suppresses ringing near strong edges. Because of the side information, the PSNR of the compressed image is usually somewhat lower than with conventional methods, but the subjective visual quality is clearly better.

17.
A. M. Sabah M. 《Digital Signal Processing》 2003, 13(4): 604-622
This paper describes a new algorithm for electrocardiogram (ECG) compression. Its main goal is to reduce the bit rate while keeping the distortion of the reconstructed signal at a clinically acceptable level. The algorithm is based on compressing the linearly predicted residuals of the wavelet coefficients of the signal: the input signal is divided into blocks, each block goes through a discrete wavelet transform, and the resulting wavelet coefficients are linearly predicted, yielding a set of uncorrelated transform-domain signals. These signals are compressed using various coding methods, including modified run-length and Huffman coding techniques. The error between the wavelet coefficients and the predicted coefficients is minimized in order to obtain the best predictor. The method is assessed using the percent root-mean-square difference (PRD) and visual inspection, and achieves small PRD and high compression ratios with low implementation complexity. Finally, the performance of the ECG compression algorithm is evaluated on data from the MIT-BIH database.

18.
In this paper, we address applications of textual image compression in which a high compression ratio and maintaining or improving the visual quality and readability of the compressed images are the main concerns. In textual images, most of the information lies in the edge regions, so the compression problem can be studied in the framework of region-of-interest (ROI) coding. Here, the Set Partitioning in Hierarchical Trees (SPIHT) coder is used within an ROI coding framework, together with image enhancement techniques that remove the leakage effect occurring in wavelet-based low-bit-rate compression. We evaluated the compression performance of the proposed method with both qualitative and quantitative measures. The qualitative measures include averaged mean opinion score (MOS) curves along with sample outputs under different conditions; the quantitative measures include two proposed modified PSNR measures as well as the conventional one. Compared with three conventional approaches, DjVu, JPEG2000, and SPIHT coding, the proposed method performs considerably better, especially qualitatively: it improved the MOS by 20 % and 30 % on average for high- and low-contrast textual images, respectively. In terms of the modified and conventional PSNR measures, it outperformed DjVu and JPEG2000 by up to 0.4 dB for high-contrast textual images at low bit rates. In addition, compressing high-contrast images with the proposed ROI technique, compared with not using it, improved the average textual PSNR by up to 0.5 dB at low bit rates.

19.
《Image Processing, IET》 2007, 1(3): 295-303
Two general methods are proposed for the arbitrary L/M-fold resizing of compressed images using lapped transforms, providing general resizing frameworks in the transform domain. To analyse image size conversion with lapped transforms, the impulse and frequency responses of the overall system are derived and analysed. This theoretical analysis shows that image resizing in the lapped-transform domain has much better characteristics than conventional image resizing in the discrete cosine transform (DCT) domain. In particular, because two neighbouring DCT blocks are used in the conversion process instead of each block being processed independently, the proposed method reduces the blocking effect to a very low level for images compressed at low bit rates.

20.
This paper proposes a compression scheme for face profile images based on three stages: modelling, transformation, and partially predictive classified vector quantization (CVQ). The modelling stage employs deformable templates to localize the salient features of face images and to normalize the image content. The second stage uses a dictionary of feature bases trained on profile face images to diagonalize the image blocks. At this stage, all normalized training and test images are spatially clustered (objectively) into four subregions according to their energy content, and the residuals of the most important clusters are further clustered (subjectively) in the spectral domain to exploit spectral redundancies. The feature-basis functions are established with the region-based Karhunen–Loeve transform (RKLT) of clustered image blocks, and each image block is matched with a representative of near-best basis functions. A predictive approach is employed for mid-energy clusters, both when searching for a basis and when searching for a codeword within the range of its cluster. The proposed scheme employs one stage of a cascaded region-based KLT-SVD and CVQ complex, followed by residual VQ stages for subjectively important regions; the first dictionary of feature bases is dedicated to the main content of the image and the second to the residuals. The scheme is evaluated on a set of human face images.
