Similar Literature
 19 similar documents found (search time: 62 ms)
1.
A JPEG Image Retrieval Algorithm Based on DCT Compression   Cited: 1 (self-citations: 1, others: 1)
刘瑞祥  赵珊  鲍泓 《计算机工程》2010,36(5):225-227
A JPEG image retrieval algorithm is proposed. The DC coefficient differences obtained after entropy decoding are locally binarized, and the resulting DC difference vector is extracted to reflect the gray-level distribution of pixels in the original image. Considering the ability of the AC coefficients in each DCT block to characterize image content, the nine largest AC coefficients of each block are selected to construct an AC-coefficient feature vector for retrieval. Experimental results show that the algorithm achieves good retrieval performance.
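A minimal sketch of how the DC-difference binarization and per-block top-9 AC selection described above might be assembled, assuming the quantized DCT coefficients of each 8×8 block are already available as a NumPy array (the block ordering, the binarization rule and the use of magnitudes for the top-9 selection are illustrative assumptions, not the authors' exact procedure):

    import numpy as np

    def block_features(dct_blocks):
        """dct_blocks: array of shape (n_blocks, 8, 8) holding quantized DCT coefficients."""
        dc = dct_blocks[:, 0, 0]                     # DC coefficient of each block
        dc_diff = np.diff(dc, prepend=dc[0])         # differences between neighbouring DC values
        dc_binary = (dc_diff >= 0).astype(np.uint8)  # local binarization of the DC difference vector

        ac_features = []
        for block in dct_blocks:
            ac = block.flatten()[1:]                 # the 63 AC coefficients of the block
            top9 = np.sort(np.abs(ac))[::-1][:9]     # the 9 largest AC magnitudes
            ac_features.append(top9)
        return dc_binary, np.asarray(ac_features)

Both feature vectors could then be compared with any standard distance (e.g. L1) at retrieval time.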

2.
Content-based retrieval has been one of the research hotspots of recent years, and many pixel-domain image retrieval techniques already exist. Since data compression has become the standard practice in multimedia applications, and still images are compressed mainly with JPEG, studying image retrieval methods based on conventional JPEG and on JPEG2000 has become inevitable. This paper surveys recent image retrieval techniques based on the discrete cosine transform, the core algorithm of JPEG, and the discrete wavelet transform, the core algorithm of JPEG2000.

3.
A JPEG Image Retrieval Algorithm Based on DCT Coefficients   Cited: 1 (self-citations: 0, others: 1)
赵珊  赵倩 《计算机工程》2010,36(19):190-192
Considering that the DC and AC coefficients in the JPEG compression standard describe different aspects of image content, a retrieval algorithm based on the spatial distribution of discrete cosine transform (DCT) coefficients is proposed. A rotation-invariant DC-coefficient difference vector is constructed to describe image features. Exploiting the characteristics of the quantized AC coefficients in each DCT block, the distribution entropy of the AC coefficients is extracted to characterize image content, and a weighting function is introduced to avoid false and missed matches caused by images whose AC distribution entropies are equal but whose spatial distributions differ. Experimental results show that the algorithm requires no full decompression, has low computational complexity, and reflects the content distribution of images well.
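As one illustration of the AC-coefficient distribution entropy mentioned above, the following sketch computes an entropy over a histogram of the quantized AC magnitudes of a single 8×8 block; the histogram binning is an assumption made here for concreteness, not the paper's exact definition:

    import numpy as np

    def ac_distribution_entropy(block):
        """block: one quantized 8x8 DCT block; returns the entropy of its AC distribution."""
        ac = np.abs(block.flatten()[1:])     # the 63 AC coefficients
        hist, _ = np.histogram(ac, bins=16)  # coarse histogram of AC magnitudes
        p = hist / hist.sum()
        p = p[p > 0]                         # drop empty bins before taking the log
        return float(-np.sum(p * np.log2(p)))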

4.
张问银  聂真理 《计算机应用》2005,25(10):2354-2355
A fast image retrieval method based on salient points in the DCT compressed domain is presented. The method computes the salient points of an image quickly and directly from a small number of non-zero DCT coefficients, requires no full decompression, lowers the computational complexity, and is fairly robust. Experimental results show that image features extracted from salient points have strong descriptive power and are well suited to image retrieval.

5.
The Internet holds a huge amount of image information, and how to organize image resources effectively and retrieve the images users need has become an active research topic. As the new-generation compression standard JPEG2000 gradually spreads, jp2 image files encoded with this standard will appear on the Internet in large numbers and will eventually replace images encoded with the existing JPEG standard. Against this background, a simple and fast retrieval algorithm working in the JPEG2000 compressed domain is proposed; feature extraction jointly considers shape, color, texture and spatial information, and all features are obtained during decompression, right after entropy decoding. Experimental results show that, compared with traditional pixel-domain image retrieval, the algorithm achieves high retrieval efficiency while remaining relatively simple.

6.
Image Retrieval Techniques in the Compressed Domain   Cited: 21 (self-citations: 0, others: 21)
李晓华  沈兰荪 《计算机学报》2003,26(9):1051-1059
Image retrieval is a key technology in multimedia applications. Most existing content-based image retrieval techniques operate in the uncompressed domain, so for the compressed images that are now ubiquitous they must first decompress and then retrieve, which is computationally expensive, requires considerable intermediate storage, and therefore severely limits the real-time performance and flexibility of retrieval systems. Meanwhile, the introduction and spread of various compression standards (such as JPEG, MPEG and JPEG2000) have pushed researchers to look for retrieval techniques that can operate directly in the compressed domain. This paper surveys the development of existing compressed-domain image retrieval techniques and discusses possible future research directions.

7.
黄帆 《现代计算机》2006,8(5):75-76,99
Most images today are stored and transmitted in compressed form, with JPEG being the most popular format, and fully decompressing JPEG images at retrieval time is rather time-consuming. This paper therefore proposes a compressed-domain scheme for fast JPEG image retrieval: key information is extracted from the main DCT (discrete cosine transform) coefficients of an image and used to compute its feature vector, which is compared with the feature vector of the query image, and the images with the smallest differences are returned as the retrieval results. This method shortens retrieval time while keeping matching accuracy high.

8.
Building on a color layout descriptor recommended in the international standard MPEG-7, this paper describes, for JPEG compressed images, how to extract the color layout descriptor in the DCT compressed domain directly from the DC components of the 8×8 blocks of the Y, Cb and Cr color component images, so that fast image indexing and retrieval based on the spatial distribution of color can be carried out in the compressed domain.
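The MPEG-7 colour layout descriptor referred to above is built from a small representative image per channel. A hedged sketch, assuming the per-block DC values of each channel are given as 2-D grids; the 8×8 resampling by block averaging and the row-major truncation (instead of the standard zig-zag scan and quantization) are simplifications for illustration only:

    import numpy as np
    from scipy.fft import dctn

    def color_layout_descriptor(dc_y, dc_cb, dc_cr, n_coeffs=(6, 3, 3)):
        """dc_*: 2-D arrays of per-block DC values for the Y, Cb and Cr channels."""
        def channel_descriptor(dc_grid, n):
            h, w = dc_grid.shape
            tiny = dc_grid[:h - h % 8, :w - w % 8]          # trim so the grid divides into 8x8 cells
            tiny = tiny.reshape(8, tiny.shape[0] // 8,
                                8, tiny.shape[1] // 8).mean(axis=(1, 3))
            coeffs = dctn(tiny, norm='ortho')               # 8x8 DCT of the representative image
            return coeffs.flatten()[:n]                     # keep a few low-frequency coefficients
        return np.concatenate([channel_descriptor(g, n)
                               for g, n in zip((dc_y, dc_cb, dc_cr), n_coeffs)])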

9.
A remote sensing image fusion method combining the discrete cosine transform (DCT) with the GIHS transform is proposed. Exploiting the properties of the DCT, the method fuses remote sensing images directly in the DCT domain and is suitable for real-time systems. Comparing its fusion results with those of existing algorithms by both subjective and objective evaluation, the method achieves a better trade-off between improving spatial resolution and preserving spectral characteristics.

10.
This paper presents a fast JPEG image retrieval method based on the normalized moment of inertia (NMI). Its key idea is to classify blocks directly in the compressed domain according to their DCT coefficients; the blocks of each class form a binary index map, the NMI value of that map is taken as one feature of the class, and the NMI features of all classes form a feature sequence of the image, which is then used for retrieval. The method requires no full decompression, lowers the computational complexity, and is fairly robust to translation, rotation and scaling of the image. Experimental results show that this retrieval method performs well.
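The normalized moment of inertia of a binary index map, as used above, is commonly defined as the square root of the moment of inertia about the centroid divided by the total mass; a short sketch under that standard definition:

    import numpy as np

    def nmi(binary_map):
        """Normalized moment of inertia of a binary index map (2-D array of 0s and 1s)."""
        ys, xs = np.nonzero(binary_map)
        if xs.size == 0:
            return 0.0
        m = xs.size                                     # "mass": number of set pixels
        cx, cy = xs.mean(), ys.mean()                   # centroid of the map
        inertia = np.sum((xs - cx) ** 2 + (ys - cy) ** 2)
        return float(np.sqrt(inertia) / m)              # normalize by the mass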

11.
In this paper, some fast feature extraction algorithms are addressed for joint retrieval of images compressed in JPEG and JPEG2000 formats. In order to avoid full decoding, three fast algorithms that convert block-based discrete cosine transform (BDCT) into wavelet transform are developed, so that wavelet-based features can be extracted from JPEG images as in JPEG2000 images. The first algorithm exploits the similarity between the BDCT and the wavelet packet transform. For the second and third algorithms, the first algorithm or an existing algorithm known as multiresolution reordering is first applied to obtain bandpass subbands at fine scales and the lowpass subband. Then for the subbands at the coarse scale, a new filter bank structure is developed to reduce the mismatch in low frequency features. Compared with the extraction based on full decoding, there is more than 72% reduction in computational complexity. Retrieval experiments also show that the three proposed algorithms can achieve higher precision and recall than the multiresolution reordering, especially around the typical range of compression ratio.

12.
A Statistical-Feature-Based Method for Texture Image Retrieval in the DCT Compressed Domain   Cited: 2 (self-citations: 1, others: 2)
A texture image retrieval method based on the discrete cosine transform (DCT) is proposed. Working in the DCT compressed domain, the method obtains statistical features of image texture by computing directly on the DCT coefficients and uses them as the basis for retrieval. Both theoretical analysis and experimental results show that the method achieves good retrieval accuracy and efficiency and is invariant to rotation.

13.
As the majority of content-based image retrieval systems operate on full images in the pixel domain, decompression is a prerequisite for the retrieval of compressed images. To provide a possible on-line indexing and retrieval technique for those jpg image files, we propose a novel pseudo-pixel extraction algorithm to bridge the gap between the existing image indexing technology, developed in the pixel domain, and the fact that an increasing number of images stored on the Web are already compressed by JPEG at the source. Further, we describe our Web-based image retrieval system, WEBimager, which uses the proposed algorithm to provide a prototype visual information system for the automatic management, indexing, and retrieval of compressed images available on the Internet. This gives users efficient tools to search the Web for compressed images and establish a database or collection of special images matching their interests. Experiments using texture- and colour-based indexing techniques support the idea that the proposed algorithm achieves significantly better results in terms of computing cost than its full-decompression or partial-decompression counterparts. This technology will help control the explosion of media-rich content by offering users a powerful automated image indexing and retrieval tool for compressed images on the Web.

14.
Fast Extraction of Image Edges in the DCT Domain   Cited: 4 (self-citations: 1, others: 3)
Image analysis and processing in the compressed domain has become a hot topic in multimedia research. This paper presents a fast method for detecting image edges in the DCT compressed domain. The method computes edge points directly from the non-zero DCT coefficients without full decompression; compared with traditional pixel-domain edge detection, it greatly reduces the computational complexity and can extract edge images at different levels of precision as needed. It can meet certain real-time requirements in applications such as remote target recognition and edge-based Web image retrieval, and therefore has good practical value. Experimental edge-extraction results on JPEG images are given and compared with traditional pixel-domain edge detection.
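One crude way to obtain block-level edge information from a handful of DCT coefficients, in the spirit of the method above, is to treat the lowest horizontal and vertical AC coefficients of each block as gradient proxies; the coefficient choice and threshold below are illustrative assumptions, not the paper's detector:

    import numpy as np

    def block_edge_map(dct_blocks, grid_shape, threshold=50.0):
        """Coarse edge map with one value per 8x8 block.
        dct_blocks: (n_blocks, 8, 8) DCT coefficients in row-major block order."""
        gh = dct_blocks[:, 0, 1]                  # AC(0,1): horizontal frequency -> vertical edges
        gv = dct_blocks[:, 1, 0]                  # AC(1,0): vertical frequency -> horizontal edges
        strength = np.hypot(gh, gv).reshape(grid_shape)
        return strength > threshold               # binary block-level edge map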

15.
In this paper, we propose a novel design of a content access and extraction algorithm for compressed image browsing and indexing, which is critical for all visual information systems. By analyzing the relationship between the DCT coefficients of one block of 8×8 pixels and its four sub-blocks of 4×4 pixels, the proposed algorithm extracts a smaller approximate image for indexing and content browsing without incurring full decompression. While its computing cost is significantly lower than that of full decompression, the approximate image also preserves the content features, which are sufficient for indexing and browsing, as evidenced by our extensive experiments.
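A common shortcut in this spirit (not necessarily the exact 8×8-to-4×4 sub-block relationship derived in the paper) is to inverse-transform only the low-frequency 4×4 corner of every 8×8 DCT block, which yields a half-resolution approximation without full decompression; a sketch assuming de-quantized coefficients:

    import numpy as np
    from scipy.fft import idctn

    def half_resolution_approximation(dct_blocks, grid_shape):
        """Half-resolution approximation from the 4x4 low-frequency corner of each 8x8 block.
        dct_blocks: (n_blocks, 8, 8) de-quantized coefficients; grid_shape: (rows, cols) of blocks."""
        rows, cols = grid_shape
        out = np.zeros((rows * 4, cols * 4))
        for idx, block in enumerate(dct_blocks):
            r, c = divmod(idx, cols)
            sub = idctn(block[:4, :4], norm='ortho') * 0.5   # 0.5 compensates the 8->4 block-size change
            out[r * 4:(r + 1) * 4, c * 4:(c + 1) * 4] = sub
        return out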

16.
To improve the efficiency of compressed image retrieval, we propose a novel statistical feature extraction algorithm in this paper to characterize image content directly in the compressed domain. The statistical feature is extracted mainly by computing a set of moments directly from DCT coefficients without involving full decompression or the inverse DCT. Following the algorithm design, a content-based image retrieval system is implemented, targeting in particular the retrieval of JPEG (Joint Photographic Experts Group) compressed images. Theoretical analysis and experimental results show that the system is robust to translation, rotation and scale transforms with minor disturbance, and that it achieves good performance in terms of retrieval efficiency and effectiveness.
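A hedged sketch of what such a moment-based descriptor could look like when computed directly from quantized DCT coefficients; which moments and which coefficient statistics to use is an illustrative choice here, not the system's actual feature set:

    import numpy as np
    from scipy.stats import skew, kurtosis

    def dct_moment_features(dct_blocks):
        """Small moment descriptor from quantized DCT coefficients, no inverse DCT needed.
        dct_blocks: (n_blocks, 8, 8) array."""
        ac = np.abs(dct_blocks.reshape(len(dct_blocks), -1)[:, 1:])   # per-block AC magnitudes
        energy = ac.sum(axis=1)                                       # one energy value per block
        return np.array([energy.mean(), energy.std(),
                         skew(energy), kurtosis(energy)])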

17.
A steganalysis method based on statistical features of the difference matrices of JPEG DCT coefficients is proposed. The algorithm keeps the horizontal and vertical difference-matrix features used by earlier methods, adds difference matrices along the main and anti-diagonal directions when extracting and computing the feature vector, and then classifies with an SVM. Experimental results show that the method detects stego JPEG images effectively and achieves a high detection accuracy.
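The four directional difference matrices mentioned above can be formed directly on the 2-D plane of quantized DCT coefficients; the sketch below turns each of them into a clipped, normalized histogram, which could then be concatenated and passed to any off-the-shelf SVM (the clipping range and binning are assumptions for illustration):

    import numpy as np

    def directional_difference_features(coef_plane, clip=5):
        """Histogram features of coefficient differences along four directions.
        coef_plane: 2-D array of quantized DCT coefficients of an image."""
        c = coef_plane.astype(np.int64)
        diffs = [c[:, :-1] - c[:, 1:],       # horizontal
                 c[:-1, :] - c[1:, :],       # vertical
                 c[:-1, :-1] - c[1:, 1:],    # main diagonal
                 c[:-1, 1:] - c[1:, :-1]]    # anti-diagonal
        feats = []
        for d in diffs:
            hist, _ = np.histogram(np.clip(d, -clip, clip),
                                   bins=np.arange(-clip, clip + 2))
            feats.append(hist / hist.sum())
        return np.concatenate(feats)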

18.
张问银  吴尽昭 《计算机工程》2005,31(10):148-149,184
A fast JPEG image retrieval method based on the normalized moment of inertia (NMI) is presented. Its key idea is to classify blocks directly in the compressed domain according to their DCT coefficients; the blocks of each class form a binary index map, the NMI value of that map is taken as one feature of the class, and the NMI features of all classes form a feature sequence of the image, which is then used for retrieval. The method requires no full decompression, lowers the computational complexity, and is fairly robust to translation, rotation and scaling of the image. Experimental results show that this retrieval method performs well.

19.
The quick advance in image/video editing techniques has enabled people to synthesize realistic images/videos conveniently. Some legal issues may arise when a tampered image cannot be distinguished from a real one by visual examination. In this paper, we focus on JPEG images and propose detecting tampered images by examining the double quantization effect hidden among the discrete cosine transform (DCT) coefficients. To our knowledge, our approach is the only one to date that can automatically locate the tampered region, and it has several additional advantages: fine-grained detection at the scale of 8×8 DCT blocks, insensitivity to different kinds of forgery methods (such as alpha matting and inpainting, in addition to simple image cut/paste), the ability to work without fully decompressing the JPEG images, and fast speed. Experimental results on JPEG images are promising.
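The double-quantization effect shows up as periodic peaks in the per-frequency histograms of DCT coefficients. A very crude indicator of such periodicity (not the paper's probabilistic model for locating tampered blocks) might look like this:

    import numpy as np

    def dq_periodicity_score(coef_values, max_bin=60):
        """Strength of the dominant periodic pattern in a DCT-coefficient histogram.
        coef_values: 1-D array of coefficients of one DCT frequency across all blocks."""
        hist, _ = np.histogram(coef_values, bins=np.arange(-max_bin, max_bin + 2))
        spectrum = np.abs(np.fft.rfft(hist - hist.mean()))   # periodic histogram -> strong spectral peak
        return float(spectrum[1:].max() / (spectrum[1:].sum() + 1e-9))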

