Similar Documents
20 similar documents found.
1.
Land cover mapping from multi-spectral satellite data is based primarily on spectral differences in land cover categories. Since only a limited number of cover types are desired in most cases, the images contain redundant information which unnecessarily complicates the digital mapping process. In this study, we have devised an algorithm to automatically and reproducibly quantize an image to be classified into a reduced number of digital levels, in most cases without a visually perceptible reduction in the image information content. The Flexible Histogram Quantization (FHQ) algorithm assumes that the histogram has one or two major peaks (representing water and/or land) and that most of the information of interest is in one peak. It aims to provide a sufficient quantization in the main peak of interest as well as in the tails of this peak by computing an optimized number of quantized levels and then identifying the range of digital values belonging to each level. A comparison of the FHQ with four existing quantization algorithms showed that the FHQ retained substantially more radiometric discrimination than histogram normalization, linear quantization, and scaling methods. Using a random sample of Landsat TM images and an AVHRR coverage of Canada, the average quantization error for the FHQ was 1.68 digital levels for an entire scene and 1.41 for land pixels only. Based on the 34 single-band test images included in the comparison, the radiometric resolution was reduced from 255 to 23.3 levels on the average, or by a factor of 10.94 for a multi-spectral image with n spectral bands. Compared to the other quantization methods, FHQ had a higher efficiency (by 65% to 148%), except for histogram equalization. 
FHQ also retained more information than histogram equalization (by 11%) but, more importantly, it provided finer resolution in the tails of the main histogram peak (by 36-664%, depending on the position in the tails) for infrequent but potentially important land cover types. In addition, unlike the other methods, the FHQ does not require a user-specified number of levels and therefore its results are fully reproducible. The FHQ can be used with single scenes, with radiometrically seamless mosaics, or when classifying radiometrically incompatible adjacent scenes. It is concluded that the FHQ provides an effective means for image quantization, as an automated pre-processing step in land cover mapping applications.
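The FHQ algorithm itself is not reproduced here; as a minimal sketch of the evaluation metric the abstract reports (average quantization error in digital levels), the uniform-quantization baseline that FHQ is compared against can be written as follows. The function names and the flat-histogram test image are illustrative assumptions, not from the paper.

```python
# Illustrative baseline only (NOT the FHQ algorithm): uniform scalar
# quantization of 8-bit values into k levels, plus the average
# quantization error used as the comparison metric in the abstract.

def uniform_quantize(value, k, vmax=255):
    """Map an 8-bit value to one of k levels; return the level's representative value."""
    step = (vmax + 1) / k
    level = min(int(value / step), k - 1)   # quantized level index
    return (level + 0.5) * step             # mid-point representative

def mean_quantization_error(values, k):
    """Average |original - reconstructed|, measured in digital levels."""
    return sum(abs(v - uniform_quantize(v, k)) for v in values) / len(values)

pixels = list(range(256))                   # a perfectly flat histogram
err = mean_quantization_error(pixels, 23)   # ~23 levels, the study's average
```

With 256 levels the error is exactly 0.5 (each value maps to its own mid-point); with 23 levels it is roughly step/4, on the same order as the 1.68-level figure reported for FHQ.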

2.
A new image feature representation method is proposed. First, low-level color information is extracted from the image to obtain color feature values, edge detection is applied to the objects in the image to compute an edge-direction angle for each pixel, and both the color feature values and the edge-direction angles are quantized. Then, based on a numerical analysis of the quantization results of neighboring pixels, an 8-dimensional feature vector is built for each pixel. Next, different weights are assigned to the different spatial relations between a center pixel and its neighbors, and a feature value is computed for every pixel from its feature vector. Finally, the number of pixels sharing each feature value is counted to form a feature histogram, which serves as the basis for image retrieval. Experiments show that the proposed method effectively describes the color distribution of an image and the spatial structure of the objects in it, records image information in finer detail, and further strengthens the discrimination between images. Compared with other methods, it achieves better retrieval results.
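The paper's exact 8-dimensional descriptor and weighting scheme are not given in the abstract; as a much-simplified sketch of the underlying idea (quantize each pixel's color and edge direction, then histogram the joint bins), the following can be assumed. All bin counts and the ramp test image are illustrative choices.

```python
# Simplified sketch (NOT the paper's 8-D feature): joint histogram of
# quantized intensity and quantized local edge direction per pixel.
import math
from collections import Counter

def joint_color_edge_histogram(img, color_bins=4, angle_bins=4):
    h, w = len(img), len(img[0])
    feats = Counter()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]    # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]    # vertical gradient
            angle = math.atan2(gy, gx) % math.pi  # edge direction in [0, pi)
            c = min(img[y][x] * color_bins // 256, color_bins - 1)
            a = min(int(angle / math.pi * angle_bins), angle_bins - 1)
            feats[(c, a)] += 1                    # count the joint bin
    return feats

img = [[x * 16 for x in range(8)] for _ in range(8)]  # horizontal ramp
hist = joint_color_edge_histogram(img)
```

On the ramp every interior pixel has a purely horizontal gradient, so all mass falls in angle bin 0, split between two intensity bins.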

3.
4.
Image retrieval based on color histograms requires quantization of a color space. Uniform scalar quantization of each color channel is a popular method for the reduction of histogram dimensionality. With this method, however, no spatial information among pixels is considered in constructing the histograms. Vector quantization (VQ) provides a simple and effective means for exploiting spatial information by clustering groups of pixels. We propose the use of Gauss mixture vector quantization (GMVQ) as a quantization method for color histogram generation. GMVQ is known to be robust for quantizer mismatch, which motivates its use in making color histograms for both the query image and the images in the database. Results show that the histograms made by GMVQ with a penalized log-likelihood (LL) distortion yield better retrieval performance for color images than the conventional methods of uniform quantization and VQ with squared error distortion.

5.
We consider a class of convex functionals that can be seen as $\mathcal{C}^{1}$ smooth approximations of the $\ell_1$-TV model. The minimizers of such functionals were shown to exhibit a qualitatively different behavior compared to the nonsmooth $\ell_1$-TV model (Nikolova et al. in Exact histogram specification for digital images using a variational approach, 2012). Here we focus on the way the parameters involved in these functionals determine the features of the minimizers $\hat{u}$. We give explicit relationships between the minimizers and these parameters. Given an input digital image f, we prove that the error $\|\hat{u}-f\|_{\infty}$ obeys $b-\varepsilon\leq\|\hat{u}-f\|_{\infty}\leq b$, where b is a constant independent of the input image. Further, we can set the parameters so that $\varepsilon>0$ is arbitrarily close to zero. More precisely, we exhibit explicit formulae relating the model parameters, the input image f, and the values b and $\varepsilon$. Conversely, we can fix the parameter values so that the error $\|\hat{u}-f\|_{\infty}$ meets some prescribed b and $\varepsilon$. All theoretical results are confirmed by numerical tests on natural digital images of different sizes with disparate content and quality.

6.
7.

In the present digital era, multimedia such as images, text, documents, and videos plays a vital role; the increasing use of digital data brings a correspondingly high demand for security. Encryption is a technique used to secure images and protect them from unauthorized access. In cryptography, chaotic maps play an important role in building strong and effective encryption algorithms. In this paper, a 3D chaotic logistic map with DNA encoding is used for confusion and diffusion of image pixels. Additionally, three symmetric keys are used to initialize the 3D chaotic logistic map, which strengthens the encryption algorithm. The symmetric keys are a 32-bit ASCII key, a Chebyshev chaotic key, and a prime key. The algorithm first applies the 3D non-linear logistic chaotic map with the three symmetric keys to generate initial conditions. These conditions are then used in image row and column permutation to create randomness among the pixels. The third chaotic sequence generated by the 3D map is used to generate a key image. Diffusion of the permuted pixels is performed using DNA encoding, after which an XOR operation is applied between the DNA-encoded input image and the key image. Analysis parameters such as NPCR, UACI, entropy, histogram, the chi-square test, and correlation are computed for the proposed algorithm and compared with existing encryption methods.
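The paper uses a 3D logistic map with DNA encoding; as a deliberately simplified illustration of the chaotic key-stream-plus-XOR idea only, the classic 1D logistic map can be sketched as follows. The parameter r = 3.99 and the seed value are illustrative assumptions.

```python
# Simplified sketch (1D logistic map, no DNA encoding, no permutation):
# chaotic key-stream generation and XOR diffusion. XOR is involutive,
# so applying the cipher twice with the same seed recovers the input.

def logistic_keystream(x0, n, r=3.99):
    """Generate n pseudo-random bytes from the logistic map x -> r*x*(1-x)."""
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_cipher(pixels, x0):
    ks = logistic_keystream(x0, len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [10, 200, 37, 255]
cipher = xor_cipher(plain, x0=0.3141)
```

Decryption is simply `xor_cipher(cipher, x0=0.3141)`; a receiver with a slightly different seed gets an unrelated key stream, which is the sensitivity property chaotic ciphers rely on.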


8.
This paper proposes a reversible secret-image sharing scheme for sharing a secret image among 2n shadow images with high visual quality (i.e., they are visually indistinguishable from their original cover images). In the proposed scheme, not only can the secret image be completely revealed, but the original cover images can also be losslessly recovered. A difference value between neighboring pixels in the secret image is shared by 2n pixels in the 2n shadow images, where n ≥ 1. A pair of shadow images constructed from the same cover image are called brother stego-images. To decrease the change in pixel values in the shadow images, each pair of brother stego-images is assigned a weighting factor when calculating the difference values to be shared. A pixel in a cover image is recovered by calculating the average of the corresponding pixels in its brother stego-images. A single stego-image reveals nothing, and a pair of pixels in brother stego-images reveals only a partial difference value between neighboring secret pixels. The more brother stego-images are collected, the more information about the secret image is revealed; the secret image is completely revealed once all of its brother stego-images are collected.
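The full 2n-shadow scheme with weighting factors is beyond the abstract; as a toy sketch of the one property it states explicitly (averaging a brother pair recovers the cover pixel, their difference reveals the shared value), one pair can be modeled as below. Overflow clamping and the weighting factors are omitted, and the function names are assumptions.

```python
# Toy sketch of the brother-stego idea (one pair only, no weighting,
# no range clamping): the pair's average recovers the cover pixel and
# the pair's half-difference reveals the shared secret value.

def embed_pair(cover_pixel, d):
    """Split secret value d across a pair of brother stego pixels."""
    return cover_pixel + d, cover_pixel - d

def recover(stego_a, stego_b):
    cover = (stego_a + stego_b) // 2     # lossless cover recovery
    secret = (stego_a - stego_b) // 2    # revealed difference value
    return cover, secret
```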

9.
Color Space Quantization for Color-Content-Based Query Systems
Color histograms are widely used in most of color content-based image retrieval systems to represent color content. However, the high dimensionality of a color histogram hinders efficient indexing and matching. To reduce histogram dimension with the least loss in color content, color space quantization is indispensable. This paper highlights and emphasizes the importance and the objectives of color space quantization. The color conservation property is examined by investigating and comparing different clustering techniques in perceptually uniform color spaces and for different images. For studying color spaces, perceptually uniform spaces, such as the Mathematical Transformation to Munsell system (MTM) and the C.I.E. L*a*b*, are investigated. For evaluating quantization approaches, the uniform quantization, the hierarchical clustering, and the Color-Naming-System (CNS) supervised clustering are studied. For analyzing color loss, the error bound, the quantized error in color space conversion, and the average quantized error of 400 color images are explored. A color-content-based image retrieval application is shown to demonstrate the differences when applying these clustering techniques. Our simulation results suggest that good quantization techniques lead to more effective retrieval.

10.
This paper presents a scheme and its Field Programmable Gate Array (FPGA) implementation for an image compression system that combines the two-dimensional discrete wavelet transform (2D-DWT) and vector quantization (VQ). The 2D-DWT works in a non-separable fashion using a parallel filter structure with distributed control to compute two resolution levels. The wavelet coefficients of the higher-frequency sub-bands are vector quantized using a multi-resolution codebook, and those of the lower-frequency sub-band at level two are scalar quantized and entropy encoded. VQ is carried out by self-organizing feature map (SOFM) neural nets working in the recall phase; codebooks are quickly generated off-line using the same nets in the training phase. The complete system, including the 2D-DWT, the multi-resolution codebook VQ, and the statistical encoder, was implemented on a Xilinx Virtex 4 FPGA and is capable of real-time compression of digital video for 512 × 512-pixel grayscale images. It offers high compression quality (PSNR values around 35 dB) and acceptable compression rates (0.62 bpp).
Javier Diaz-Carmona
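The paper's non-separable parallel filter structure and SOFM codebooks are hardware-specific; as a minimal software sketch of the sub-band decomposition the pipeline starts from, a one-level 2D Haar DWT on an even-sized block can be written as follows. Haar is an assumption here, the abstract does not name the wavelet.

```python
# Minimal one-level 2D Haar DWT sketch (wavelet choice assumed, not
# stated in the abstract): split an even-sized grayscale block into
# LL (average), LH, HL, HH (detail) sub-bands.

def haar2d(img):
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4   # average
            LH[i // 2][j // 2] = (a - b + c - d) / 4   # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4   # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH
```

In the paper's pipeline the LH/HL/HH outputs would feed the multi-resolution VQ and the LL of a second level would be scalar quantized.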

11.
This paper studies the detection of Least Significant Bit (LSB) steganography in digital media using hypothesis testing theory. The goal is threefold: first, to design a test whose statistical properties are known, which in particular allows a false-alarm probability to be guaranteed; second, to study the quantization of samples throughout the paper; and third, to use a local linear parametric model of samples to estimate unknown parameters and design a test that can be applied when no information about the cover medium is available. To this end, the steganalysis problem is cast within the framework of hypothesis testing theory and digital media are considered as quantized signals. In a theoretical context where media parameters are assumed to be known, the Likelihood Ratio Test (LRT) is presented and its statistical performance is analytically established; this highlights the impact of quantization on the most powerful steganalyzer. In a practical situation where image parameters are unknown, a Generalized LRT (GLRT) is proposed based on a local linear parametric model of samples. This model allows the GLRT's statistical properties to be established in order to guarantee a prescribed false-alarm probability. Focusing on digital images, it is shown that the well-known WS (Weighted Stego) detector is close to the proposed GLRT under a specific cover-image model. Finally, numerical results on natural images show the relevance of the theoretical findings.
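The embedding operation this detection theory targets is plain LSB replacement; for readers unfamiliar with it, a minimal sketch (function names assumed) is:

```python
# Minimal LSB-replacement sketch: each cover sample's least significant
# bit is overwritten by one message bit; remaining samples pass through.

def lsb_embed(samples, bits):
    changed = [(s & ~1) | b for s, b in zip(samples, bits)]
    return changed + samples[len(bits):]

def lsb_extract(stego, n):
    return [s & 1 for s in stego[:n]]
```

It is exactly the weak statistical trace of this overwrite (values pair up as 2k/2k+1) that the LRT/GLRT detectors in the paper exploit.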

12.
A New Color Image Segmentation Algorithm Based on Vector Quantization and Region Growing
To counter the adverse effects of illumination changes and shadows on image segmentation, a new color image segmentation algorithm based on vector quantization and region growing is proposed. The algorithm considers not only the color information of a color image but also its spatial information. It first quantizes the color image with a modified GLA algorithm and selects seed pixels according to the quantization result; it then grows a region from each seed pixel using a vector-angle similarity criterion combined with spatial adjacency information; finally, the fuzzy C-means algorithm classifies the remaining unassigned pixels. Experiments show that the algorithm largely overcomes the adverse effects of illumination changes and shadows, and its segmentation results agree well with human visual perception.
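The quantization and fuzzy C-means steps are omitted here; as a sketch of the vector-angle similarity test used during region growing (threshold value assumed), two RGB vectors are merged when the angle between them is small, which makes the criterion insensitive to brightness scaling from shadows or illumination changes:

```python
# Sketch of the vector-angle similarity criterion (threshold assumed):
# the angle between RGB vectors ignores their magnitudes, so a pixel
# and its shadowed (scaled-down) version compare as similar.
import math

def vector_angle(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return math.acos(max(-1.0, min(1.0, dot / norm)))  # clamp for float safety

def similar(p, q, max_angle=0.05):
    return vector_angle(p, q) <= max_angle
```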

13.
In this paper, a content-based image retrieval method that can search large image databases efficiently by color, texture, and shape content is proposed. Quantized RGB histograms and the dominant triple (hue, saturation, and value), extracted from the quantized HSV joint histogram in the local image region, are used to represent global/local color information in the image. Entropy and the maximum entry of co-occurrence matrices are used for texture information, and an edge-angle histogram is used to represent shape information. A relevance feedback approach that couples the proposed features is used to obtain better retrieval accuracy. A new indexing method that supports fast retrieval in large image databases is also presented. Tree structures constructed by the k-means algorithm, together with the triangle inequality, eliminate candidate images from the similarity calculation between the query image and each database image. We find that the proposed method eliminates, on average, 92.2 percent of the images from direct comparison.
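The triangle-inequality pruning the indexing method relies on can be sketched in isolation (function names and the flat candidate list are assumptions; the paper organizes candidates in a k-means tree): if d(q, ref) and d(x, ref) are known, |d(q, ref) − d(x, ref)| lower-bounds d(q, x), so candidates whose bound exceeds the search threshold need no full comparison.

```python
# Sketch of triangle-inequality pruning on a flat list (the paper uses
# a k-means tree): the reverse triangle inequality gives a cheap lower
# bound on d(query, x) from distances to a shared reference point.

def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def prune_candidates(query, ref, db, threshold):
    dq = euclidean(query, ref)
    survivors = []
    for x in db:
        if abs(dq - euclidean(x, ref)) <= threshold:  # bound cannot exclude x
            survivors.append(x)                       # full comparison needed
    return survivors
```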

14.
Image enhancement is a preprocessing step for digital images that can effectively improve their global or local characteristics. Histogram specification is an important topic in image enhancement. This paper studies the principles of histogram specification and gives the relevant derivations and algorithm; taking a grayscale image as an example, histogram-specification enhancement is implemented in Matlab, and the experimental results are presented and analyzed. The results show that histogram specification can selectively apply local contrast enhancement to a chosen gray-level range, yielding the desired enhanced image.
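The paper's Matlab demonstration is not reproduced here; the standard histogram-specification mapping it describes (match each source gray level to the target level with the nearest cumulative distribution value) can be sketched in Python as:

```python
# Histogram-specification sketch: build a lookup table mapping each
# source gray level to the target level whose CDF value is closest.

def cdf(hist):
    total, acc, out = sum(hist), 0, []
    for h in hist:
        acc += h
        out.append(acc / total)
    return out

def specify_histogram(src_hist, tgt_hist):
    """Return LUT: source level -> target level with the nearest CDF value."""
    cs, ct = cdf(src_hist), cdf(tgt_hist)
    return [min(range(len(ct)), key=lambda g: abs(ct[g] - c)) for c in cs]
```

Applying the LUT to every pixel reshapes the image's histogram toward the specified one.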

15.
Information hiding is an important research issue in digital life. In this paper, we propose a two-stage data hiding method with high capacity and good visual quality based on image interpolation and histogram modification techniques. At the first stage, we generate a high-quality cover image using the developed enhanced neighbor mean interpolation (ENMI) and then take the difference values between input and cover pixels as a carrier to embed secret data. In this stage, the proposed scheme raises the image quality considerably thanks to the ENMI method. At the second stage, a histogram modification method is applied to the difference image to further increase the embedding capacity and preserve the image quality without distortion. Experimental results indicate that the proposed method yields a better PSNR for the stego-image, improving on average by 43% compared with previous key studies.

16.
The application of gray-scale digitizers to the digitization of binary images of straight-edged planar silhouettes is considered. A measure of digitization-induced ambiguity is introduced. It is shown that if the gray levels are not quantized and the spatial sampling resolution is sufficiently high, error-free reconstruction of the original binary image from the digitized image is possible. When the total bit-count for the representation of the digitized image is limited, i.e., sampling resolution and quantization accuracy are both finite, error-free reconstruction is usually impossible. In this case a bit allocation problem arises, and it is shown that the sensible bit allocation policy is to increase the quantization accuracy as much as possible once a "sufficient" spatial sampling resolution has been reached.

17.
Comparing images using joint histograms
Color histograms are widely used for content-based image retrieval due to their efficiency and robustness. However, a color histogram only records an image's overall color composition, so images with very different appearances can have similar color histograms. This problem is especially critical in large image databases, where many images have similar color histograms. In this paper, we propose an alternative to color histograms called a joint histogram, which incorporates additional information without sacrificing the robustness of color histograms. We create a joint histogram by selecting a set of local pixel features and constructing a multidimensional histogram. Each entry in a joint histogram contains the number of pixels in the image that are described by a particular combination of feature values. We describe a number of different joint histograms, and evaluate their performance for image retrieval on a database with over 210,000 images. On our benchmarks, joint histograms outperform color histograms by an order of magnitude.

18.
To improve the security and imperceptibility of reversible digital watermarking and to increase its embedding capacity, a public-key-based reversible watermark is proposed. The method first shifts the pixels between the peak of the cover image's histogram and the zero points on either side of it, then extracts a feature value from the cover image, XORs this feature value with a watermark encrypted by a chaotic system, and embeds the result into the processed cover image using a public key. Verification is the inverse of embedding; once it is complete, the shifted pixels are restored according to the relationship between the peak point and the zero points, which fully recovers the original image. The public-key and chaotic systems ensure the security of the scheme, while shifting pixels between the peak and its neighboring zero points both allows more information to be embedded at a high peak signal-to-noise ratio and guarantees that every pixel can be authenticated. Simulations on a large number of images show that the method is highly secure and, compared with similar methods, embeds more information with better imperceptibility.
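The public-key and chaotic layers are omitted here; the histogram-shifting core the scheme builds on (shift the levels between the peak and the nearest zero point, then embed one bit at each peak pixel) can be sketched as follows. Only the right-hand zero point is handled, and capacity equals the number of peak pixels.

```python
# Classic histogram-shifting sketch (right-side zero point only; the
# paper adds public-key and chaotic encryption on top of this core).
from collections import Counter

def hs_embed(pixels, bits):
    hist = Counter(pixels)
    peak = max(hist, key=hist.get)          # most frequent gray level
    zero = peak + 1
    while hist.get(zero, 0) > 0:            # nearest empty level to the right
        zero += 1
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)               # shift to free level peak+1
        elif p == peak:
            out.append(p + next(it, 0))     # embed one bit at the peak
        else:
            out.append(p)
    return out, peak, zero

def hs_extract(stego, peak, zero):
    bits, pixels = [], []
    for p in stego:
        if p == peak:
            bits.append(0); pixels.append(peak)
        elif p == peak + 1:
            bits.append(1); pixels.append(peak)
        elif peak + 1 < p <= zero:
            pixels.append(p - 1)            # undo the shift
        else:
            pixels.append(p)
    return pixels, bits                     # original image fully restored
```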

19.
郭芳侠  王晅  陈伟伟 《计算机工程》2009,35(16):130-132
A digital watermarking algorithm resistant to geometric attacks, based on exact histogram specification, is proposed. Using block information entropy and edge detection, the algorithm selects the largest flat region of the image and applies exact histogram specification to shape that region's histogram into a specific form; to improve watermark detection accuracy and the fidelity of the watermarked image, a sawtooth-shaped histogram is chosen as the watermark information. Experiments show that the watermarked image has good fidelity and that the algorithm is robust to geometric attacks, noise contamination, JPEG compression, and linear and nonlinear filtering.

20.
A method for unsupervised segmentation of color-texture regions in images and video is presented. This method, which we refer to as JSEG, consists of two independent steps: color quantization and spatial segmentation. In the first step, colors in the image are quantized to several representative classes that can be used to differentiate regions in the image. The image pixels are then replaced by their corresponding color class labels, thus forming a class-map of the image. The focus of this work is on spatial segmentation, where a criterion for "good" segmentation using the class-map is proposed. Applying the criterion to local windows in the class-map results in the "J-image," in which high and low values correspond to possible boundaries and interiors of color-texture regions. A region growing method is then used to segment the image based on the multiscale J-images. A similar approach is applied to video sequences. An additional region tracking scheme is embedded into the region growing process to achieve consistent segmentation and tracking results, even for scenes with nonrigid object motion. Experiments show the robustness of the JSEG algorithm on real images and video.
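JSEG's segmentation criterion J compares the total spatial scatter of class-map positions with the scatter around each class's own centroid, J = (S_T − S_W) / S_W; a high J means the classes occupy separate areas (a likely region boundary inside the window). A minimal sketch on a toy class-map (the windowing and multiscale machinery are omitted):

```python
# Sketch of JSEG's J criterion on a whole class-map (in the paper it
# is evaluated over local windows at several scales).

def j_value(class_map):
    """J = (S_T - S_W) / S_W over pixel positions grouped by class label."""
    h, w = len(class_map), len(class_map[0])
    groups = {}                                   # class label -> list of (y, x)
    for y in range(h):
        for x in range(w):
            groups.setdefault(class_map[y][x], []).append((y, x))

    def scatter(pts):
        cy = sum(y for y, _ in pts) / len(pts)    # centroid of the point set
        cx = sum(x for _, x in pts) / len(pts)
        return sum((y - cy) ** 2 + (x - cx) ** 2 for y, x in pts)

    all_pts = [p for pts in groups.values() for p in pts]
    s_total = scatter(all_pts)
    s_within = sum(scatter(pts) for pts in groups.values())
    return (s_total - s_within) / s_within
```

Two spatially separated classes give a high J, while a checkerboard mixture of the same two classes gives J near zero.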
