Similar Articles (20 results)
1.

The existing image authentication methods for absolute moment block truncation coding (AMBTC) modify the bits of the quantization levels or bitmaps to embed the authentication code (AC). However, the bit modification in these methods is equivalent to LSB replacement, which may introduce undesirable distortion. Moreover, modifying the bitmap to embed the AC reduces image quality significantly, especially at image edges, and the existing methods may fail to detect certain special modifications to the marked image. In this paper, we propose an efficient authentication method for AMBTC-compressed images. The AC is derived from the bitmap and the location information, and is embedded into the quantization levels using the adaptive pixel pair matching (APPM) technique. Since the bitmap is unchanged and APPM embedding is efficient, high image quality can be achieved. The experimental results reveal that the proposed method not only significantly reduces the embedding distortion but also provides better authentication results than prior state-of-the-art works.
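As background for the method above, here is a minimal numpy sketch of AMBTC block coding together with a toy authentication code; the hash-based AC and the APPM embedding into the quantization levels are hypothetical simplifications, not the paper's construction.

```python
import numpy as np

def ambtc_encode_block(block):
    """AMBTC-encode one block: a bitmap plus two quantization levels."""
    mean = block.mean()
    bitmap = block >= mean
    # Guard against uniform blocks, where one group would be empty.
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return low, high, bitmap

def ambtc_decode_block(low, high, bitmap):
    """Reconstruct the block from its AMBTC triple."""
    return np.where(bitmap, high, low)

def authentication_code(bitmap, block_index, n_bits=8):
    """Toy AC derived from the bitmap and the block's location, so that
    swapping two otherwise-valid blocks changes the expected code.
    Hypothetical stand-in for the paper's AC construction."""
    digest = hash((bitmap.tobytes(), block_index))
    return digest % (1 << n_bits)
```

Because the AC depends on both the bitmap and the block position, tampering with either is detectable, while leaving the bitmap untouched preserves edge quality.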


2.
The blocking effect caused by coefficient quantization during image compression is an annoying problem. The main effect of quantization is to eliminate the high-frequency components of the image, which leads to noticeable discontinuities across block boundaries and degrades the image. A deblocking method based on the Curvelet transform is proposed in this paper. Based on the fact that Curvelet coefficients at different scales correspond to different aspects of the blocking artifacts in the degraded image, our method adaptively processes the coefficients in each scale layer to recover the degraded image. As the experiments show, our method retains more detail and achieves better recovery results, under both subjective and objective criteria, than traditional spatial-domain and wavelet deblocking methods.
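The paper operates on Curvelet coefficients; since no Curvelet transform ships with the common Python scientific stack, the sketch below substitutes a wavelet decomposition (PyWavelets) to illustrate the per-scale adaptive processing, with illustrative fixed gains in place of the paper's adaptive rule.

```python
import numpy as np
import pywt

def scale_adaptive_deblock(img, wavelet="db4", levels=3, gains=(1.0, 0.8, 0.5)):
    """Attenuate detail coefficients more at fine scales, where blocking
    discontinuities concentrate, and less at coarse scales."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
    out = [coeffs[0]]                                  # keep approximation band
    for (ch, cv, cd), g in zip(coeffs[1:], gains):     # coarse -> fine
        out.append((g * ch, g * cv, g * cd))
    return pywt.waverec2(out, wavelet)
```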

3.
Algorithms for manipulating compressed images
A family of algorithms that implement operations directly on compressed digital images is described. These algorithms allow many traditional image manipulation operations to be performed 50 to 100 times faster than their brute-force counterparts. It is shown how the algebraic operations of pixel-wise and scalar addition and multiplication, which are the basis for many image transformations, can be implemented on compressed images. These operations are used to implement two common video transformations: dissolving one video sequence into another, and subtitling. The performance of these operations is compared with the brute-force approach. The limitations of the technique, extensions to other compression standards, and the relationship of this research to other work in the area are also discussed.
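The enabler of such compressed-domain manipulation is the linearity of the transform. A minimal numpy/scipy demonstration on a single 8 × 8 block (the paper's JPEG-specific implementation additionally handles quantization and run-length structure):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct2(x):
    return dctn(x, norm="ortho")

def idct2(c):
    return idctn(c, norm="ortho")

rng = np.random.default_rng(0)
A = rng.uniform(0, 255, (8, 8))
B = rng.uniform(0, 255, (8, 8))

# Pixel-wise scaling and dissolve: the DCT is linear, so a weighted sum of
# coefficient blocks decodes to the same weighted sum of pixel blocks.
alpha = 0.3
mix = alpha * dct2(A) + (1 - alpha) * dct2(B)
assert np.allclose(idct2(mix), alpha * A + (1 - alpha) * B)

# Scalar addition touches only the DC term: for an orthonormal 8x8 DCT,
# adding b to every pixel adds 8*b to the DC coefficient.
C = dct2(A)
C[0, 0] += 8 * 5.0
assert np.allclose(idct2(C), A + 5.0)
```

A dissolve therefore never needs to decode either sequence fully; the weighted sum is formed coefficient-by-coefficient and decoded once.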

4.
It is well known that at low bit rates, a block-based discrete cosine transform compressed image or video can exhibit visually annoying blocking and ringing artifacts. Low-pass filters are very effective in reducing the blocking artifacts in smooth areas, but it is difficult to achieve satisfactory ringing artifact removal using an adaptive filtering scheme alone. This paper presents a neural network-based deblocking method that is effective on various types of images. The first step of the scheme is block classification, which labels each 8 × 8 block as one of three types, PLAIN, EDGE, or TEXTURE, based on its statistical characteristics. The next step reduces the blocking and ringing artifacts by applying three trained layered neural networks to the three types of image areas. Compared with other algorithms, the simulation results clearly show that the proposed algorithm is very effective at reducing both blocking and ringing artifacts while preserving true edge and texture information, thus significantly improving the visual quality of blocky images or videos.
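A sketch of the first step, statistical block classification; the thresholds and the gradient heuristic are illustrative assumptions, not the paper's trained criteria:

```python
import numpy as np

def classify_block(block, plain_thresh=4.0, edge_frac=0.25):
    """Label an 8x8 block PLAIN, EDGE, or TEXTURE from simple statistics.
    Constants are illustrative, not the paper's values."""
    b = block.astype(float)
    if b.std() < plain_thresh:
        return "PLAIN"
    gy, gx = np.gradient(b)
    mag = np.hypot(gx, gy)
    # An edge block concentrates strong gradients on few pixels; a texture
    # block spreads moderate gradients across the whole block.
    strong = mag > 0.5 * mag.max()
    return "EDGE" if strong.mean() < edge_frac else "TEXTURE"
```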

5.
Compressed sensing (CS) is a novel sampling theory that breaks through the conventional Nyquist sampling limit and has brought a revolution to the field of signal processing. This article investigates compression techniques for CS hyperspectral images in order to illustrate the advantages offered by this theory. First, several comparative experiments reveal that prior compression techniques, designed for data acquired by conventional hyperspectral imaging systems, suffer either from low compression ratios or from wasted sampling resources. After a condensed analysis, we argue that CS theory offers the possibility of avoiding such defects. A straightforward scheme that exploits spectral correlation is then proposed to compress CS hyperspectral images and further reduce the data size. Moreover, a flexible recovery strategy is designed to speed up the reconstruction of the original bands from the corresponding CS measurements. Experimental results on real hyperspectral images demonstrate the efficiency of the proposed technique.
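A toy numpy illustration of why spectral correlation survives CS acquisition and can be exploited in the measurement domain (the paper's actual coding scheme and its flexible recovery strategy are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 1024, 256                        # pixels per band, measurements per band
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix

# Toy correlated "hyperspectral" bands: a shared scene plus small band noise.
base = rng.standard_normal(n)
bands = np.stack([base + 0.05 * rng.standard_normal(n) for _ in range(4)])

y = bands @ phi.T                       # per-band CS measurements y_i = phi x_i

# Sensing is linear, so inter-band correlation survives in the measurements:
# differences between adjacent bands' measurements are far smaller than the
# measurements themselves, and hence cheaper to encode.
print(np.abs(y[1:]).mean())                 # typical measurement magnitude
print(np.abs(np.diff(y, axis=0)).mean())    # much smaller residual magnitude
```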

6.
With the emergence of digital libraries, more and more documents are stored and transmitted over the Internet as compressed images, so it is valuable to develop a system capable of retrieving documents from these compressed document images. Targeting CCITT Group 4, a compression standard widely used for document images, we present an approach for retrieving documents directly from CCITT Group 4 compressed images. The black and white changing elements are extracted directly from the compressed data to serve as feature pixels, and connected components are detected at the same time. Word boxes are then obtained by merging the connected components. A weighted Hausdorff distance is proposed so that an unsupervised classifier can assign the word objects from both the query document and the database documents to corresponding classes, while probable stop words are excluded. Document vectors are built from the occurrence frequencies of the word-object classes, and the pairwise similarity of two document images is given by the scalar product of their document vectors. Nine groups of articles from different domains are used to test the validity of the approach. Preliminary experiments on document images captured from students' theses show promising performance.
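A sketch of the final matching step, assuming the unsupervised classifier has already mapped word objects to class labels (the labels below are hypothetical); vectors are normalized here so the scalar product is scale-invariant:

```python
import numpy as np
from collections import Counter

def doc_vector(word_classes, n_classes):
    """Occurrence-frequency vector over word-object classes."""
    counts = Counter(word_classes)
    v = np.array([counts.get(c, 0) for c in range(n_classes)], dtype=float)
    return v / (np.linalg.norm(v) or 1.0)

# Hypothetical class labels from the unsupervised word classifier,
# stop-word classes already excluded.
query = [0, 1, 1, 2, 5, 5, 5]
candidate = [0, 1, 2, 2, 5, 5, 7]
print(doc_vector(query, 8) @ doc_vector(candidate, 8))
```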

7.
8.
Digital image forensics investigates the unethical use of doctored images by recovering an image's processing history. Most cameras compress images using the JPEG standard. When such an image is decompressed and recompressed with a different quantization matrix, it becomes double compressed; in certain cases, e.g. after a cropping attack, the image may also be recompressed with the same quantization matrix. JPEG double compression is thus an integral part of forgery creation, and detecting and analyzing it helps an investigator assess an image's authenticity. In this paper, a two-stage technique is proposed to estimate the first quantization matrix, or its steps, from partially double compressed JPEG images. In the first stage, detection of the double compressed region via the JPEG ghost technique is extended to automatically isolate the doubly compressed part of the image. The second stage analyzes that part to estimate the first quantization matrix or steps; an optimized filtering scheme is also proposed to cope with the effects of estimation error. The proposed scheme is evaluated on partially double compressed images from two different datasets; such datasets have not been considered in previous state-of-the-art approaches. The first stage achieves an average accuracy of 95.45%, and the second stage yields an error below 1.5% for the first 10 DCT coefficients, thereby outperforming existing techniques. The experiments consider partially double compressed images in which the recompression uses a different quantization matrix.
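A toy numpy demonstration of the double-quantization fingerprint that first-quantization-step estimators, including the paper's second stage, exploit; the steps q1 and q2 and the Laplacian coefficient model are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.laplace(scale=12.0, size=200_000)          # toy AC-coefficient population

q1, q2 = 7, 3
single = q2 * np.round(x / q2)                     # compressed once
double = q2 * np.round(q1 * np.round(x / q1) / q2) # compressed twice

# Double quantization leaves periodic empty/over-full bins in the histogram
# of dequantized coefficients; estimators recover q1 from that periodicity.
edges = np.arange(-30.5 * q2, 31.5 * q2, q2)       # bins centered on multiples of q2
h1, _ = np.histogram(single, edges)
h2, _ = np.histogram(double, edges)
print(h1[28:36])   # smooth single-compression histogram
print(h2[28:36])   # gaps at multiples of q2 unreachable from multiples of q1
```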

9.
Multimedia Tools and Applications - A novel removable visible watermarking (RVW) algorithm combining Block Truncation Coding (BTC) and a chaotic map (RVWBCM) is presented in this paper. It embeds...

10.
Reversible hiding in DCT-based compressed images
This paper presents a lossless and reversible steganography scheme for hiding secret data in each block of quantized discrete cosine transform (DCT) coefficients in JPEG images. In this scheme, two successive zero coefficients among the medium-frequency components of each block are used to hide the secret data. Furthermore, the scheme modifies the quantization table to maintain the quality of the stego-image. Experimental results confirm that the proposed scheme provides acceptable stego-image quality and successfully achieves reversibility.
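A simplified sketch of the hiding idea: writing two bits into a pair of zero-valued medium-frequency coefficients and restoring them on extraction. The fixed positions are an illustrative assumption; the paper scans for two successive zeros along the zigzag order and also adjusts the quantization table, which is omitted here.

```python
import numpy as np

# A designated pair of mid-frequency positions in the 8x8 quantized-DCT block
# (an illustrative choice, not the paper's scanning rule).
P1, P2 = (3, 2), (2, 3)

def embed_two_bits(qblock, b1, b2):
    """Write two secret bits into the designated coefficient pair, provided
    both coefficients are zero; blocks failing the test are skipped."""
    if qblock[P1] == 0 and qblock[P2] == 0:
        qblock[P1], qblock[P2] = b1, b2
        return True
    return False

def extract_two_bits(qblock):
    """Read the bits back and reset the pair to zero, which makes the hiding
    reversible: the original coefficient block is recovered exactly."""
    b1, b2 = int(qblock[P1]), int(qblock[P2])
    qblock[P1] = qblock[P2] = 0
    return b1, b2
```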

11.
For the fusion of large remote-sensing images, conventional fusion methods must consider every pixel of the input images, while global compressed-sampling fusion incurs high reconstruction cost and large storage requirements. This paper proposes a remote-sensing image fusion method based on block compressed sensing (BCS) and gives its detailed workflow: the input images are first compressively sampled block by block with BCS, the compressed measurements are then fused with a linear weighting strategy, and finally an iterative thresholding projection (ITP) reconstruction algorithm recovers the fused image and removes blocking artifacts. Simulations show that the ITP algorithm has low computational cost and high reconstruction accuracy. Tests on real data show that, compared with conventional weighted wavelet fusion, the BCS fusion result is essentially the same in quantitative measures such as mean, standard deviation, and information entropy, as well as in visual appearance, with some difference only in average gradient. The algorithm achieves effective compressed fusion with few samples, has small storage requirements and low reconstruction cost, and uses a simple fusion decision process, making it well suited to fusing large remote-sensing images.
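A minimal numpy sketch of the measurement-domain fusion step: by linearity of block compressed sensing, weighted fusion of the measurements matches measuring the weighted fusion of the images. The ITP reconstruction stage is omitted, and the block size, measurement rate, and weight are illustrative.

```python
import numpy as np

def bcs_measure(img, phi, bs=16):
    """Block compressed sensing: one Gaussian measurement matrix `phi`
    (m x bs*bs) applied to every non-overlapping bs x bs block."""
    h, w = img.shape
    ys = []
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            ys.append(phi @ img[i:i + bs, j:j + bs].reshape(-1))
    return np.stack(ys)

rng = np.random.default_rng(3)
bs, m = 16, 64
phi = rng.standard_normal((m, bs * bs)) / np.sqrt(m)
a = rng.uniform(0, 1, (64, 64))
b = rng.uniform(0, 1, (64, 64))

# Fuse in the measurement domain with weight w; linearity guarantees this
# equals the measurements of the pixel-domain weighted fusion.
w = 0.6
fused_y = w * bcs_measure(a, phi) + (1 - w) * bcs_measure(b, phi)
assert np.allclose(fused_y, bcs_measure(w * a + (1 - w) * b, phi))
```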

12.
Multimedia Tools and Applications - Protecting the security of information transmission over the Internet has become a critical contemporary issue. Compressed images are now widely used in mobile...

13.
This paper concerns color image restoration, aiming at overall objective quality improvement of compressed color images rather than mere artifact reduction. In compressed color images, colors are usually represented by luminance and chrominance components, and in view of the characteristics of the human visual system, the chrominance components are generally represented more coarsely than the luminance component. To recover such chrominance components, we previously proposed a model-based chrominance restoration algorithm in which color images are modeled by a Markov random field. This paper presents a color image restoration algorithm derived from MAP estimation, in which all components are estimated jointly. Experimental results show that the proposed restoration algorithm is more effective than the previous one.
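For intuition, a generic MAP restoration sketch with a Gaussian-MRF smoothness prior, solved by gradient descent; this is a stand-in illustration, not the paper's color model or estimator:

```python
import numpy as np

def map_restore(y, lam=0.1, iters=100, step=0.2):
    """Gradient descent on a MAP objective: a quadratic data-fidelity term
    ||x - y||^2 / 2 plus a Gaussian-MRF smoothness prior on neighboring
    pixels, weighted by lam."""
    x = y.astype(float).copy()
    for _ in range(iters):
        # Discrete Laplacian: pulls each pixel toward its 4-neighborhood mean,
        # i.e. the descent direction of the pairwise smoothness term.
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= step * ((x - y) - lam * lap)
    return x
```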

14.
Analysis of small test fragments, or compact artifacts, is essential in many color correction problems. An efficient method of analysis is the recognition of such compact artifacts. However, pattern recognition based on features requires determining significant features for each applied problem. An alternative approach to recognizing compact artifacts is proposed that requires no feature extraction; it is based on systems of iterated functions and comparison of their attractors.
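A minimal sketch of the attractor-comparison idea: render IFS attractors with the chaos game and compare the resulting point sets with the Hausdorff distance (the paper's matching procedure may differ):

```python
import numpy as np

def attractor(maps, n=1500, seed=0):
    """Chaos-game rendering of an IFS attractor: repeatedly apply a randomly
    chosen affine map x -> A @ x + b and record the visited points."""
    rng = np.random.default_rng(seed)
    pts = np.empty((n, 2))
    x = np.zeros(2)
    for i in range(n):
        A, b = maps[rng.integers(len(maps))]
        x = A @ x + b
        pts[i] = x
    return pts[50:]                       # drop the initial transient

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two point clouds."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Sierpinski-triangle IFS: two renderings of the same system land on
# (numerically) the same attractor, so the distance is near zero.
S = [(0.5 * np.eye(2), np.array(v))
     for v in ([0.0, 0.0], [0.5, 0.0], [0.25, 0.5])]
print(hausdorff(attractor(S, seed=1), attractor(S, seed=2)))
```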

15.
Digital fingerprinting can effectively trace the source of illegally distributed data. Most existing algorithms are suited only to uncompressed images, which limits their fields of application. In this paper, a digital fingerprinting algorithm for compressed images based on the non-subsampled contourlet transform (NSCT) is proposed, aimed at high capacity and strong robustness for compressed-image fingerprinting. The NSCT low-frequency coefficients of compressed images are more suitable for hiding information than DCT coefficients, and they are used to construct a high-dimensional host vector in which Gaussian fingerprints are hidden. By increasing the dimension of the host vector, the fingerprinting capacity improves fundamentally on the one hand, and the resistance to collusion attacks is greatly enhanced on the other. Extensive experimental results show that the proposed algorithm achieves the claimed performance in comparison with existing algorithms.
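A sketch of the generic additive spread-spectrum step assumed here: hiding a Gaussian fingerprint in a host coefficient vector and detecting it by correlation. In the paper the host vector is built from NSCT low-frequency coefficients; any transform coefficients serve for the illustration.

```python
import numpy as np

def embed_fingerprint(host, strength=0.05, seed=42):
    """Additively embed a per-user Gaussian fingerprint into a host
    coefficient vector; `seed` identifies the user."""
    rng = np.random.default_rng(seed)
    fp = rng.standard_normal(host.shape)
    return host + strength * fp, fp

def detect_fingerprint(marked, host, fp):
    """Correlation detector: a large normalized correlation between the
    residual and a user's fingerprint implicates that user."""
    residual = marked - host
    return residual @ fp / (np.linalg.norm(residual) * np.linalg.norm(fp))
```

A longer host vector gives each Gaussian fingerprint more energy to spread over, which is the intuition behind the capacity and anti-collusion gains claimed above.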

16.
Multimedia Tools and Applications - Analyzing multimedia data in mobile devices is often constrained by limited computing capacity and power storage. Therefore, more and more studies are trying to...

17.
18.
An algorithm based on local order statistics is proposed for adaptive reduction of speckle noise in synthetic aperture radar (SAR) images. Selective smoothing is obtained by replacing a pixel value belonging to either tail of the local histogram with its percentile, whose area is adaptively defined by a Gaussian function of the local variation coefficient. The filter adapts to the actual noise level and preserves structures, textures, and point targets, as well as the local mean, without blurring edges, largely owing to its closure property. Comparisons with other speckle-smoothing algorithms on real SAR images show selective signal-to-noise ratio (SNR) enhancements.
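A simplified sketch of the filtering rule, with hypothetical constants: the center pixel is clipped to local percentiles whose tail area shrinks, via a Gaussian of the local variation coefficient, wherever local variation (structure) is high:

```python
import numpy as np
from scipy.ndimage import generic_filter

def tail_clip(window, base_area=10.0):
    """Replace the center pixel with a local percentile when it sits in a
    tail of the window histogram. The clipping area shrinks (less smoothing)
    where the local variation coefficient is high, i.e. near structure.
    Constants are illustrative, not the paper's calibrated values."""
    center = window[window.size // 2]
    m, s = window.mean(), window.std()
    cv = s / m if m > 0 else 0.0
    area = base_area * np.exp(-(cv / 0.3) ** 2)   # Gaussian of the local CV
    lo, hi = np.percentile(window, [area, 100.0 - area])
    return np.clip(center, lo, hi)

def despeckle(img, size=7):
    # generic_filter hands tail_clip each flattened size x size neighborhood.
    return generic_filter(img.astype(float), tail_clip, size=size)
```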

19.
Fast retrieval of DCT-compressed JPEG images
卞国春  张曦煌 《计算机应用》2005,25(7):1623-1625
A retrieval method for JPEG images compressed with the discrete cosine transform (DCT) is proposed. Exploiting the properties of JPEG image data in the DCT compressed domain, the method extracts features directly and requires only partial entropy decoding of the JPEG stream. It speeds up the image retrieval process while preserving the accuracy of the retrieval results, and it offers a degree of robustness.
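A sketch of the compressed-domain feature idea, assuming the per-block DC coefficients have already been obtained by partial entropy decoding (the JPEG parsing itself is not shown, and the histogram feature is a generic stand-in for the paper's features):

```python
import numpy as np

def dct_domain_feature(dc_coefficients, n_bins=32):
    """Build a retrieval feature from per-block DC coefficients, which are
    available without inverse DCT or full decompression."""
    hist, _ = np.histogram(dc_coefficients, bins=n_bins, range=(-1024, 1024))
    hist = hist.astype(float)
    return hist / (hist.sum() or 1.0)

def similarity(f1, f2):
    """Histogram-intersection similarity between two feature vectors."""
    return np.minimum(f1, f2).sum()
```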

20.
Multimedia Tools and Applications - Image dependence is increasing for information sharing. In image forensics, the detection of median filtering is challenging on low-resolution and highly...
