Similar Literature
20 similar documents were retrieved (search time: 46 ms).
1.
An image watermarking scheme in the 2D DCT domain is proposed by exploiting the advantages of Zernike moments. The Zernike transform has been used in image processing applications such as image recognition, authentication and protection. Here, we propose to use the Zernike moments of the DCT coefficients to obtain an efficient watermarking method. In particular, the novelty of the proposed approach lies in the feature-selection method, which preserves image quality while providing robustness to attacks. A criterion for selecting image blocks suitable for watermarking is also given; it is based on the ℓ1-norm of the Zernike moments. The efficiency of the proposed watermarking algorithm is demonstrated on several examples covering different types of attacks (compression, noise, filtering and geometrical attacks).
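As a rough illustration of the block-selection criterion (not the authors' code), the sketch below keeps the blocks whose DCT-domain Zernike moments have a large ℓ1-norm; the 8×8 block size, moment order, threshold value and the use of mahotas for Zernike moments are all assumptions.

```python
# Sketch of the l1-norm block-selection rule; NOT the paper's implementation.
import numpy as np
from scipy.fftpack import dct
import mahotas.features as mf  # stand-in Zernike routine (assumption)

def dct2(block):
    """2D type-II DCT of a block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def select_blocks(image, block=8, order=8, threshold=1.0):
    """Return top-left corners of blocks whose DCT-domain Zernike moments
    have an l1-norm above `threshold` (illustrative values)."""
    selected = []
    h, w = image.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            coeffs = dct2(image[r:r + block, c:c + block].astype(float))
            zm = mf.zernike_moments(np.abs(coeffs), radius=block // 2, degree=order)
            if np.sum(np.abs(zm)) > threshold:  # l1-norm criterion
                selected.append((r, c))
    return selected
```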

2.
Denoising filters are useful for reducing noise; however, they often blur and smear edges and boundaries, which are needed for segmenting or locating objects. To overcome this problem, many filters with contrast-enhancement capability have been developed and are widely applied in related fields. Recently, researchers found that traditional criteria such as the mean squared error (MSE) and signal-to-noise ratio (SNR) are not suitable for evaluating such filters. Owing to the lack of effective metrics for this task, visual inspection and newly proposed image quality assessment (QA) criteria, such as the structural similarity (SSIM) index, have been used instead; however, visual inspection depends heavily on the subjectivity of the observers. This paper shows that evaluating denoising filters differs from image quality assessment: existing QA criteria cannot effectively evaluate the performance of denoising filters, especially those with contrast-enhancement capability, and new criteria are needed. It then proposes a novel objective and effective assessment criterion, the homogeneity mean difference (HMD), which describes textural and structural information and changes therein. We employ 503 images from three databases to demonstrate the superiority of the proposed metric over existing ones and to show that HMD is an effective and useful metric for assessing denoising filters with or without contrast enhancement, with potentially wide applications in image processing and computer vision.

3.
The focus of this study is the use of the Monte Carlo method in fuzzy linear regression. The aim is to determine appropriate error measures for estimating fuzzy linear regression model parameters with the Monte Carlo method, in which parameters are estimated without mathematical programming or heavy fuzzy arithmetic operations. In the literature, only two error measures (E1 and E2) are available for this purpose, and their accuracy under the Monte Carlo procedure has not been evaluated. In this article, the mean squared error, mean percentage error, mean absolute percentage error and symmetric mean absolute percentage error are proposed for estimating fuzzy linear regression model parameters with the Monte Carlo method, and the estimation accuracies of the existing and proposed error measures are explored. Comparing the error measures in terms of estimation accuracy, this study shows that the best error measures for Monte Carlo estimation of fuzzy linear regression parameters are E1, E2 and the mean squared error, while the worst is the mean percentage error. These results should be useful for enriching studies on fuzzy linear regression models.
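For reference, the textbook definitions of the four proposed error measures are sketched below in Python; the paper applies them to fuzzy outputs, which this plain-number sketch does not attempt to reproduce.

```python
import numpy as np

def mse(y, yhat):
    """Mean squared error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.mean((y - yhat) ** 2)

def mpe(y, yhat):
    """Mean percentage error (signed, in %)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.mean((y - yhat) / y)

def mape(y, yhat):
    """Mean absolute percentage error (in %)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def smape(y, yhat):
    """Symmetric mean absolute percentage error (in %)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.mean(np.abs(y - yhat) / ((np.abs(y) + np.abs(yhat)) / 2))
```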

4.

Digital image watermarking has become popular owing to its applications in copyright protection and secret communication. Most image watermarking algorithms reported to date modify the host content to embed secret data, which reduces robustness and limits embedding capacity. In the present work, a novel spatial-domain watermarking scheme called the Pixel Value Search Algorithm (PVSA) is proposed, which uses a linear search operation to achieve high robustness and a theoretically unlimited embedding capacity. In the proposed scheme, secret data are embedded into a host image by mapping their intensity values to row and column locations. Because of this linear mapping, the structural content of the host is not altered, and multiple watermarks can be mapped into a single host image with the PVSA technique. The proposed algorithm is verified using MATLAB® simulations, and its performance is assessed using the standard benchmark tool StirMark. Experimental results illustrate the robustness of the PVSA technique against Gaussian blurring, Gaussian noise, salt-and-pepper noise, Poisson noise, speckle noise, mean and median filtering, histogram equalization, image sharpening, intensity transformation, unsharp filtering, JPEG attacks, etc. Subsequently, an ASIC implementation of the PVSA algorithm is carried out using Verilog HDL and modules of the Cadence® EDA tool so that the chip can be integrated as a watermark co-processor. The ASIC implementation in a 0.18 μm technology at an operating frequency of 100 MHz consumes 326.34 μW for the complete hardware architecture.
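A loose sketch of the pixel-value-search idea described above, under the assumption that the "embedding" output is simply a key of (row, column) locations; the actual PVSA key format, search order and handling of missing intensities are not specified here.

```python
# Hedged sketch of the PVSA mapping idea; the host image is never modified.
import numpy as np

def pvsa_embed(host, secret_bytes, rng=None):
    """Map each secret byte to the (row, col) of a host pixel holding that
    intensity; the coordinate list acts as the extraction key."""
    rng = rng or np.random.default_rng(0)
    key = []
    for value in secret_bytes:
        rows, cols = np.nonzero(host == value)   # linear search for the value
        if rows.size == 0:
            raise ValueError(f"intensity {value} absent from host image")
        i = rng.integers(rows.size)              # pick one matching location
        key.append((int(rows[i]), int(cols[i])))
    return key

def pvsa_extract(host, key):
    """Recover the secret bytes by reading the host at the stored locations."""
    return bytes(int(host[r, c]) for r, c in key)
```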


5.
This paper presents an adaptive block-size reversible image watermarking scheme. A reversible watermarking approach recovers the original image from the watermarked image after extracting the embedded watermark. Without loss of generality, the proposed scheme adaptively segments an image of size 2^N × 2^N into blocks of size 2^L × 2^L, where L runs from a user-defined number down to 1, according to the block structures. Where possible, the differences between the central ordered pixel and the other pixels in each block are enlarged to embed watermark bits. The embedded quantity is determined by the largest difference in a block, and watermark bits are embedded into the LSBs of these differences. Experimental results show that the proposed adaptive block-size scheme has a higher capacity than the conventional fixed-block-size method.

6.

This paper presents two image processing applications, compression and watermarking, that exploit the localization property of the wavelet transform. The first application proposes a simple region-based, scalable image compression algorithm: the wavelet coefficients belonging to the region of interest are located in each subband, and these groups of coefficients are used to adjust the resolution of the region of interest. A watermarking method is described in the second application. The scheme examines variations of the local Hölder regularity of the image and calculates the similarity of the correct watermark before and after modification. Experimental results show that the proposed approach is effective in authenticating the origin of an image.

7.
In image processing, image similarity indices evaluate how much structural information a processed image retains relative to a reference image. Commonly used measures, such as the mean squared error (MSE) and peak signal-to-noise ratio (PSNR), ignore the spatial information (e.g. redundancy) contained in natural images, which can lead to similarity evaluations inconsistent with human visual perception. Wang et al. (2004) introduced the structural similarity measure (SSIM), which quantifies image fidelity through local correlations scaled by local brightness and contrast comparisons; this correlation-based SSIM outperforms MSE in the similarity assessment of natural images. However, as correlation only measures linear dependence, distortions from multiple sources or nonlinear image processing, such as nonlinear filtering, can cause SSIM to under- or overestimate the true structural similarity. In this article, we propose a new similarity measure that replaces the correlation and contrast comparisons of SSIM with a term obtained from a nonparametric test with superior power to capture general dependence, including, as a special case, linear and nonlinear dependence in the conditional mean regression function. Applied to images affected by noise contamination, filtering and watermarking, the new similarity measure provides a more consistent measure of structural fidelity than the commonly used measures.
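For context, a single-window version of the standard SSIM statistic of Wang et al. (2004), whose correlation and contrast terms the proposed measure replaces, is sketched below; the constants and window handling follow the usual convention and are not taken from this article.

```python
import numpy as np

def ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM: luminance, contrast and structure comparison."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```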

8.
Wavelet-domain statistical models have been shown to be useful for applications such as image compression, watermarking and Gaussian noise reduction. One of the main problems in wavelet-based compression is to handle quantisation error efficiently. Inspired by the Weber–Fechner law, we introduce a logarithmic model that approximates the non-linearity of human perception and partially pre-compensates for the effect of the display device. A logarithmic transfer function is proposed to spread the coefficient distribution in the wavelet domain in accordance with human perceptual attributes. The standard deviation σ of the logarithmically scaled coefficients in a subband represents the average difference from the mean of the coefficients in that subband, and is chosen as the visibility threshold for that subband. Computing σ for all subbands yields a quantisation matrix for a given image. The quantisation matrix is then scaled by a factor ρ to provide the best trade-off between visual quality and the bit-rate of the processed image. A major advantage of this model is that the visibility threshold can be observed and the quantisation matrix produced automatically, in a content-dependent and scalable way, without further user interaction. The experimental results show that the model works for any wavelet.
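A minimal sketch of the quantisation-matrix construction described above, assuming a pywt decomposition and a log1p transfer function; the paper's exact transfer function, wavelet choice and scaling may differ.

```python
# Sketch: per-subband quantisation step = rho * std of log-scaled coefficients.
import numpy as np
import pywt

def quantisation_matrix(image, wavelet='db4', levels=3, rho=1.0):
    coeffs = pywt.wavedec2(np.asarray(image, float), wavelet, level=levels)
    q = {'approx': rho * np.std(np.log1p(np.abs(coeffs[0])))}
    for lev, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
        for name, band in zip(('H', 'V', 'D'), (ch, cv, cd)):
            q[f'level{lev}_{name}'] = rho * np.std(np.log1p(np.abs(band)))
    return q  # one visibility threshold per subband, scaled by rho
```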

9.
Image fusion is an effective enhancement methodology widely included in high-quality imaging systems. Nevertheless, as with other enhancement techniques, output quality is usually assessed in small-sample subjective evaluation studies, which are very limited in predicting the human-perceived quality of general image fusion outputs. Simple, blind, universal and perceptually motivated methods for assessing composite image quality remain a challenge, partially solved only in particular applications. In this paper, we propose a fidelity measure, called MS-QW, with two major characteristics rooted in the natural image statistics framework: multi-scale computation and a structural similarity score. In our experiments, we correlate the scores of our measure with the subjective ratings and the state-of-the-art measures included in the 2015 Waterloo IVC multi-exposure fusion (MEF) image database. We also use the measure to correctly rank the classical general fusion methods included in the Image Fusion Toolbox on medical, infra-red and multi-focus image examples. Moreover, we study score variability and statistical discrimination power on the TNO night-vision database using the Friedman test. Finally, we define a new leave-one-out procedure based on the fidelity measure that selects the subset of images, within a collection of distorted and unregistered cell-phone-type images, that yields a defect-free composite output. We exemplify the procedure by fusing a collection of images of paintings by Latour and Van Dongen suffering from glass highlights and speckle noise, among other artifacts. The proposed multi-scale quality measure MS-QW improves over previous single-scale similarity measures towards a fidelity assessment that links quantitative image fusion quality metrics with human perceptual qualitative scores.

10.
Objective: When raster geographic data are used, integrity verification is needed to detect destruction or tampering of the data, and copyright protection is needed to prevent malicious distribution; dual watermarking can accomplish both tasks at the same time. Method: Using an XOR-based (2,2) visual cryptography scheme (VCS) and the discrete wavelet transform (DWT), a dual watermark is embedded into digital raster geographic data. A semi-fragile watermark serves as the first watermark for integrity verification; its bits are embedded according to the magnitude relations among coefficients of the horizontal high-frequency (detail) sub-band after the DWT. A zero-watermark serves as the second watermark for copyright protection: the singular values of the low-frequency sub-band after the DWT are extracted to generate a feature share, and the XOR-based (2,2)-VCS combines the feature share with the watermark information to generate a copyright share. Results: Experiments on raster geographic data verify the effectiveness of the algorithm. The first watermark correctly distinguishes incidental attacks from malicious tampering: after JPEG compression of the watermarked data with quality factors 90, 80, 70, 60 and 50, the normalized correlation (NC) values of the extracted integrity watermark are 1, 0.996, 0.987, 0.9513 and 0.949, respectively; cropping attacks are localized exactly, and replacement attacks are localized approximately. The second watermark offers good visual quality and strong robustness: under filtering, JPEG compression, cropping, scaling and other attacks, the NC values of the extracted copyright watermark are better than those of other schemes. Conclusion: The proposed dual watermarking algorithm for raster geographic data, based on the XOR-based (2,2)-VCS and the DWT, achieves copyright protection while performing integrity verification.
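A minimal sketch of the XOR-based (2,2)-VCS step used for the zero-watermark: the copyright share is the XOR of the feature share and the watermark, and XOR-ing the two shares recovers the watermark losslessly. The DWT/SVD feature extraction that produces the feature share is outside this sketch.

```python
import numpy as np

def make_copyright_share(feature_share, watermark_bits):
    """share2 = share1 XOR secret; both inputs are binary (0/1) arrays."""
    return np.bitwise_xor(feature_share.astype(np.uint8),
                          watermark_bits.astype(np.uint8))

def recover_watermark(feature_share, copyright_share):
    """XOR-stacking the two shares restores the watermark exactly."""
    return np.bitwise_xor(feature_share.astype(np.uint8),
                          copyright_share.astype(np.uint8))
```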

11.
This article proposes a novel dimension-reduction method for independent component analysis (ICA) in process monitoring, based on the minimum mean squared error (MSE). First, the independent components (ICs) are ranked by their importance as estimated by the MSE, and a mathematical proof is presented. Second, the top-n ICs are selected as dominant components, reducing the dimension of the ICs. The sum of squared independent scores (I²) and the squared prediction error (SPE) are adopted as monitoring statistics, with control limits determined by kernel density estimation (KDE). The proposed dimension-reduction method is applied to fault detection in a simple multivariate process and in the Tennessee Eastman benchmark simulation. Finally, two fault conditions of a pulverizing system in a power plant are analyzed with the proposed method. The experimental results verify its effectiveness.
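One common formulation of the two monitoring statistics is sketched below, assuming the demixing matrix of the dominant ICs is already available (e.g. from FastICA after the MSE-based ranking); the paper's exact construction may differ.

```python
import numpy as np

def monitoring_statistics(x, W_dominant):
    """I2 = squared norm of the dominant independent scores;
    SPE = squared error of reconstructing x from those scores only."""
    s_dom = W_dominant @ x                        # dominant independent scores
    i2 = float(s_dom @ s_dom)
    x_hat = np.linalg.pinv(W_dominant) @ s_dom    # back-projection to data space
    e = x - x_hat
    spe = float(e @ e)
    return i2, spe
```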

12.
New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of an attribute Y from remotely sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'single index model' transformations of X, and variance functions generate estimates of the variance of Y. Three case studies, with data from the Forest Inventory and Analysis program of the U.S. Forest Service, the Finnish National Forest Inventory and Landsat ETM+ ancillary data, demonstrate applications of the proposed estimators. Nearly unbiased knn predictions of three forest attributes were obtained. Estimates of mean squared error indicate that knn is an attractive technique for integrating remotely sensed and ground data to provide forest attribute maps and areal predictions.
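A hedged sketch of the basic knn prediction step that the article builds on (not of the proposed variance estimators); the data arrays and hyper-parameters below are placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Placeholder reference data: spectral values at field plots and the measured attribute.
X_ref = np.random.rand(200, 6)           # e.g. ETM+ band values at plots (placeholder)
y_ref = np.random.rand(200) * 300.0      # e.g. stem volume, m^3/ha (placeholder)

knn = KNeighborsRegressor(n_neighbors=5, weights='distance').fit(X_ref, y_ref)

X_pixels = np.random.rand(1000, 6)       # ancillary data for map pixels (placeholder)
y_pred = knn.predict(X_pixels)           # pixel-level predictions
areal_estimate = y_pred.mean()           # simple areal prediction over the pixels
```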

13.
Improving the robustness of digital watermarking with LDPC codes
Based on the equivalence between the digital watermarking system and the communication system model, this paper proposes a robust image watermarking scheme based on LDPC codes. The scheme uses an LDPC code as the watermark channel code to perform error control over the watermark transmission: the LDPC-encoded watermark sequence is embedded in the frequency domain of the original image, and an iterative LDPC decoding algorithm is used when extracting the watermark. Simulation results show that the scheme lowers the bit error rate of the watermark during transmission, strengthens its resistance to attacks, and achieves good robustness and visual imperceptibility.

14.
Objective: Most existing watermarking algorithms operate in the plaintext domain and are thus vulnerable to intrusion and theft. To protect user privacy and improve security, this paper proposes a reversible watermarking method for piracy tracing in a Gray-code-based encrypted domain, which supports operating directly on the ciphertext. Method: A homomorphic encryption system based on Gray code (HESGC) is first proposed and used to encrypt the cover image. The image is then partitioned and classified according to the integer wavelet transform (IWT) and the characteristics of the human visual system (HVS); embedding, reversible recovery and extraction are carried out with the newly proposed algorithm; finally, the newly introduced joint watermark tracing strategy (JWT) is used for piracy tracing. Results: Six classic images from the USC-SIPI database were selected as standard test images. Compared with other reversible watermarking algorithms, the proposed method achieves higher PSNR values (up to 50 dB) with SSIM values of 1, confirming reversibility. The proposed HESGC expands the original cover image to eight times its size, so the capacity is large: the theoretical maximum capacity is 3.75 bit/pixel, whereas most current reversible watermarking algorithms offer less than 1 bit/pixel. Besides piracy tracing, the method resists common attacks such as random noise, median filtering, image smoothing, JPEG coding, LZW coding and convolution blurring: comparing the original tracing proof with that of the attacked image, pirated copies yield a similarity of about 1, while non-pirated ones are far below 1, mostly around 0.6. Conclusion: This paper presents a reversible watermarking scheme in the encrypted domain, introducing HESGC and JWT for the first time and realizing ciphertext-domain reversible watermarking with piracy tracing. The scheme uses a grayscale image directly as the watermark, removing the previous restriction of using a binary (or binarized grayscale) watermark, and improves the security of the grayscale watermark through cascaded chaos. In addition, smooth/texture "islands" within texture/smooth regions are eliminated during partitioning and classification, making the classification more accurate and reasonable. Experimental results show that the scheme resists common attacks, offers large capacity and high security, and protects user privacy well; it realizes reversible watermarking in the encrypted domain and suits privacy-critical fields such as medicine and the military.

15.
This paper investigates the use of wavelet ensemble models for forecasting the compressive strength of high-performance concrete (HPC). More specifically, we first incorporate bagging and gradient boosting into building artificial neural network (ANN) ensembles: bagged artificial neural networks (BANN) and gradient-boosted artificial neural networks (GBANN). The coefficient of determination (R²), mean absolute error (MAE) and root mean squared error (RMSE) statistics are used to evaluate the predictive models. Empirical results show that the ensemble models (R²_BANN = 0.9278, R²_GBANN = 0.9270) are superior to a conventional ANN model (R²_ANN = 0.9088). We then couple the discrete wavelet transform (DWT) with the ANN ensembles to enhance prediction accuracy further. The study concludes that the DWT is an effective tool for increasing the accuracy of the ANN ensembles (R²_WBANN = 0.9397, R²_WGBANN = 0.9528).
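A minimal sketch of the bagged ANN (BANN) idea, assuming scikit-learn MLPs and manual bootstrap resampling; the DWT preprocessing and the gradient-boosting variant are analogous and omitted, and all hyper-parameters are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def bagged_ann(X, y, n_models=20, seed=0):
    """Train n_models MLPs on bootstrap resamples; predict with their mean."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))     # bootstrap sample
        m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
        models.append(m.fit(X[idx], y[idx]))
    return lambda Xq: np.mean([m.predict(Xq) for m in models], axis=0)
```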

16.
This paper studies networked H∞ filtering for linear discrete-time systems. A new model is proposed for the filtering error system that simultaneously captures the communication constraint, random packet dropout and quantization effects in networked systems. Using the multiple Lyapunov function method, a sufficient condition is presented for the filtering error system to be mean-square exponentially stable with a prescribed H∞ performance. The obtained condition depends on parameters of the networked system, such as the node access sequence, packet dropout rate and quantization density. With these parameters fixed, a design procedure for the desired H∞ filter is also presented based on the derived condition. Finally, an illustrative example shows the effectiveness of the proposed method.

17.
We investigate exponential stability and L2-gain analysis for the synchronization of stochastic complex networks under average-dwell-time switched topology, taking into account external disturbance, internal noise and fast time-varying delay in the synchronization process. Based on the proposed stochastic network model, a new notion of L2-gain synchronization is introduced to address mean-square exponential stability under switched topology with an H∞ performance from the extrinsic disturbances to the synchronization error. The obtained results are applicable to the fast time-varying case with a delay derivative larger than 1. Finally, numerical simulations demonstrate the effectiveness of our strategies.

18.
To improve the accuracy of character recognition on smart-meter chip images, the noise in the chip images must be removed to reduce interference. This paper proposes a chip-image denoising algorithm based on two-dimensional variational mode decomposition (2D-VMD) and non-local means (NLM) filtering. First, 2D-VMD decomposes the noisy chip image into K modal components. Then, the noise components are identified and removed according to the proposed structural similarity (SSIM) threshold-setting method, and the image is reconstructed from the remaining informative components. Finally, NLM filtering is applied to the reconstructed image to remove residual noise, achieving a second stage of denoising. Experimental results show that, compared with traditional denoising algorithms, the proposed algorithm removes irrelevant noise while preserving the character information of the original chip image, reducing the mean squared error and increasing the peak signal-to-noise ratio of the denoised chip image, thereby improving image quality.
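A pipeline-level sketch of the two-stage denoising procedure; `vmd_2d` is a hypothetical stand-in for a 2D-VMD implementation (not a standard library call), and the SSIM threshold value and mode-selection rule shown are illustrative rather than the paper's.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim
from skimage.restoration import denoise_nl_means

def denoise_chip_image(noisy, vmd_2d, K=5, ssim_threshold=0.1):
    """Decompose with 2D-VMD, drop low-SSIM (noise-like) modes, then apply NLM."""
    modes = vmd_2d(noisy, K)                      # K modal components (hypothetical helper)
    rng = noisy.max() - noisy.min()
    keep = [m for m in modes
            if ssim(noisy, m, data_range=rng) > ssim_threshold] or modes
    reconstructed = np.sum(keep, axis=0)          # image rebuilt from informative modes
    return denoise_nl_means(reconstructed)        # second-stage NLM filtering
```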

19.
Image watermarking has emerged as a useful method for addressing security issues such as authenticity, copyright protection and rightful ownership of digital data. Existing watermarking schemes use either a binary or a grayscale image as the watermark. This paper proposes a new robust, adaptive watermarking scheme in which both the host and the watermark are color images of the same size and dimension. The security of the proposed scheme is enhanced by scrambling both the color host and the watermark with the Arnold chaotic map. The host image is decomposed by the redundant discrete wavelet transform (RDWT) into four sub-bands of the same dimension, and the approximation sub-band then undergoes singular value decomposition (SVD) to obtain the principal component (PC). The scrambled watermark is inserted directly into the principal component of the scrambled host image using an adaptive multi-scaling factor optimized by an artificial bee colony algorithm, chosen with regard to the perceptual quality of both host and watermark to overcome the trade-off between imperceptibility and robustness. The RDWT–SVD hybridization provides shift invariance, which yields a higher embedding capacity in the host image while preserving imperceptibility and robustness by exploiting the properties of the SVD. Imperceptibility and robustness are measured with both qualitative and quantitative evaluation parameters: the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and normalized cross-correlation (NC). Experiments are performed against several image processing attacks, and the results are analyzed and compared with related existing watermarking schemes, clearly demonstrating the usefulness of the proposed scheme. At the same time, the proposed scheme overcomes the false positive error (FPE) problem that affects most existing SVD-based watermarking schemes.
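A minimal sketch of the Arnold (cat-map) scrambling step used to encrypt the host and watermark before embedding; the iteration count serves as the secret key, and the map assumes a square image.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Apply (x, y) -> (x + y, x + 2y) mod N to an N x N image, `iterations` times."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out
```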

20.
Fungal growth leads to spoilage of food and animal feed and to the formation of mycotoxins and potentially allergenic spores. There is growing interest in modelling microbial growth as an alternative to time-consuming traditional enumeration techniques. Several statistical models have been reported to describe the growth of different micro-organisms; however, the nature of neural networks as highly non-linear approximators makes them an alternative methodology. This paper presents the application of neural networks in predictive microbiology. The technique is used to model the joint effect of water activity, pH and temperature on the maximum specific growth rate of the ascomycetous fungus Monascus ruber. Neural network and polynomial models were compared against the experimental data using statistical indices, namely the coefficient of determination (R²), root mean squared error (RMSE), mean relative percentage error (MRPE), mean absolute percentage error (MAPE), standard error of prediction (SEP), and the bias (Bf) and accuracy (Af) factors; graphical plots were also used for model comparison. The performance of the learning-based systems provides encouraging results, while sensitivity analysis showed that, of the three environmental factors, temperature was the most influential on fungal growth, followed by water activity and, to a lesser extent, pH. Neural networks offer a powerful alternative technique for modelling microbial kinetic parameters and could thus become an additional tool in predictive mycology.
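For reference, the standard predictive-microbiology definitions of the bias (Bf) and accuracy (Af) factors, together with RMSE and MAPE, are sketched below; these are textbook formulas, not taken from the paper.

```python
import numpy as np

def bias_factor(obs, pred):
    """Bf = 10 ** mean(log10(pred / obs)); Bf = 1 means no systematic bias."""
    return 10 ** np.mean(np.log10(np.asarray(pred, float) / np.asarray(obs, float)))

def accuracy_factor(obs, pred):
    """Af = 10 ** mean(|log10(pred / obs)|); larger Af means less accurate."""
    return 10 ** np.mean(np.abs(np.log10(np.asarray(pred, float) / np.asarray(obs, float))))

def rmse(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return np.sqrt(np.mean((obs - pred) ** 2))

def mape(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs((obs - pred) / obs))
```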

