Similar Documents
20 similar documents found (search time: 15 ms).
1.
Two-dimensional and three-dimensional noise reduction techniques are applied to real 3D images and compared. The comparison is based on the busyness of the resulting images and on their fidelity to the original images. The following methods, each with 2D and 3D versions, are reviewed: mean filtering, median filtering, nearest neighbor smoothing, selective averaging, and maximum likelihood smoothing. The results suggest that the 3D techniques are more effective at removing noise and retaining image information content than the 2D techniques. The methods that produced the highest quality images were the nearest neighbor and maximum likelihood smoothing techniques. The mean and median filtering methods removed the most noise, but blurred the images. The selective averaging method provided intermediate results.
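A minimal sketch of the 2D-versus-3D comparison for one of the reviewed methods, median filtering, on a synthetic noisy volume (the data, window size, and error metric below are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def median_filter_3d(vol, k=3):
    """Median-filter a 3D volume with a k x k x k window (edge-clipped)."""
    r = k // 2
    out = np.empty_like(vol)
    Z, Y, X = vol.shape
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                block = vol[max(z-r, 0):z+r+1, max(y-r, 0):y+r+1, max(x-r, 0):x+r+1]
                out[z, y, x] = np.median(block)
    return out

def median_filter_2d_slicewise(vol, k=3):
    """Apply a 2D k x k median filter independently to each slice."""
    r = k // 2
    out = np.empty_like(vol)
    for z in range(vol.shape[0]):
        sl = vol[z]
        for y in range(sl.shape[0]):
            for x in range(sl.shape[1]):
                block = sl[max(y-r, 0):y+r+1, max(x-r, 0):x+r+1]
                out[z, y, x] = np.median(block)
    return out

# Synthetic volume: a bright cube corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros((8, 8, 8))
clean[2:6, 2:6, 2:6] = 1.0
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
err2d = np.mean((median_filter_2d_slicewise(noisy) - clean) ** 2)
err3d = np.mean((median_filter_3d(noisy) - clean) ** 2)
print(f"MSE 2D: {err2d:.4f}  MSE 3D: {err3d:.4f}")
```

The 3D filter draws each output voxel from a 27-sample neighborhood instead of 9, which is the extra inter-slice evidence the comparison credits for better noise removal.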

2.
In image processing, image similarity indices evaluate how much structural information is maintained by a processed image in relation to a reference image. Commonly used measures, such as the mean squared error (MSE) and peak signal-to-noise ratio (PSNR), ignore the spatial information (e.g. redundancy) contained in natural images, which can lead to a similarity evaluation inconsistent with human visual perception. Recently, a structural similarity measure (SSIM), which quantifies image fidelity through estimation of local correlations scaled by local brightness and contrast comparisons, was introduced by Wang et al. (2004). This correlation-based SSIM outperforms MSE in the similarity assessment of natural images. However, as correlation only measures linear dependence, distortions from multiple sources or nonlinear image processing such as nonlinear filtering can cause SSIM to under- or overestimate the true structural similarity. In this article, we propose a new similarity measure that replaces the correlation and contrast comparisons of SSIM with a term obtained from a nonparametric test that has superior power to capture general dependence, including linear and nonlinear dependence in the conditional mean regression function as a special case. Applied to images subjected to noise contamination, filtering, and watermarking, the new similarity measure provides a more consistent measure of image structural fidelity than commonly used measures.
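For reference, the correlation-based SSIM that the proposed measure modifies can be sketched as follows. This is a single-window (global-statistics) version using the conventional constants K1 = 0.01 and K2 = 0.03 for a dynamic range of 1; a real SSIM implementation averages the index over sliding local windows:

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM combining luminance, contrast, and structure
    (correlation) comparisons into one ratio."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
print(ssim(img, img))        # identical images give a value of 1
print(ssim(img, 1.0 - img))  # inverted image: negative covariance, much lower score
```

Replacing the covariance-based term with a nonparametric dependence statistic, as the article proposes, changes only the contrast/structure part of this formula while keeping the luminance comparison intact.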

3.
Independent component analysis in the blind watermarking of digital images
J.J., Neurocomputing, 2007, 70(16-18): 2881.
We propose a new method for the blind robust watermarking of digital images based on independent component analysis (ICA). We apply ICA to compute statistically independent transform coefficients in which we embed the watermark. The main advantages of this approach are twofold. On the one hand, each user can define their own ICA-based transformation; these transformations behave as "private keys" of the method. On the other hand, we show that some of these transform coefficients have white-noise-like spectral properties. We develop an orthogonal watermark that can be blindly detected with a simple matched filter. We also address relevant issues such as the perceptual masking of the watermark and the estimation of the detection probability. Finally, experiments are included to illustrate the robustness of the method against common attacks and to compare its performance with other transform-domain watermarking algorithms.

4.
A probabilistic cluster recognition method for detecting edges in noisy images
常寿德, Acta Automatica Sinica (自动化学报), 1990, 16(5): 436-440.
This paper presents a new method for detecting edges in noisy images. Starting from the gray-level probability domain of the image, the method divides the probability spectrum into two clusters of equal area and detects image edges by evaluating the distance between the clusters. Theoretical analysis and experimental results show that the method has strong noise suppression and edge extraction capabilities.

5.
Publications such as consumer magazines rely heavily on image libraries as sources for the images they use in their issues. Traditionally, magazine editorial staff have discussed their image requirements over the telephone with library staff and the library has conducted the search. Many libraries have now developed Web sites and their customers search them for images themselves. A minority have e-commerce capabilities, and enable customers to purchase and download digital images from their sites. This survey found that magazine staff do not often choose to search digital libraries, preferring instead to continue to contact the library by telephone. Most also choose not to buy the use of digital images, but prefer to continue to work with conventional transparencies and slides. The reasons for these preferences, and the reasons they are unlikely to change in the short term, are explored.

6.
An image enhancement technique is described for the preprocessing of stained white blood cell images which have been digitized through two different color filters from either end of the visible spectrum. Typically, corresponding picture elements (or pixels) from blood cell images digitized in this manner exhibit slight changes in grey-level due to the color filtering, but remain strongly correlated in optical density with each other. Also, color and density information are interrelated in the pixels of both of the filtered images. The technique described is a whitening transformation on the bivariate distribution of image pixels; this results in two uncorrelated axes, one relating to density and the other relating to color. The spatial effect on the two original images is to produce two separate, transformed, "color" and "density" images.
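The whitening transformation can be sketched as follows on synthetic pixel pairs; the correlated-density model below is an illustrative assumption, not the paper's blood cell data:

```python
import numpy as np

# Hypothetical pixel pairs from two color-filtered images: a shared optical
# density component plus a small, filter-dependent color shift.
rng = np.random.default_rng(1)
density = rng.normal(0.5, 0.15, 1000)   # shared density component
color = rng.normal(0.0, 0.03, 1000)     # small color-dependent shift
red, blue = density + color, density - color
pixels = np.stack([red, blue])          # shape (2, N), strongly correlated rows

# Whitening: diagonalize the sample covariance and rescale to unit variance.
cov = np.cov(pixels)
evals, evecs = np.linalg.eigh(cov)
centered = pixels - pixels.mean(axis=1, keepdims=True)
white = np.diag(evals ** -0.5) @ evecs.T @ centered

# The whitened axes are uncorrelated: covariance is the identity matrix.
print(np.round(np.cov(white), 3))
```

The large-eigenvalue axis follows the shared density component, while the small-eigenvalue axis captures the color difference between the two filtered images, matching the "density" and "color" images described above.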

7.
This paper presents lossless image compression and information-hiding schemes based on the same methodology, which builds on the known SCAN formal language for data accessing and processing. In particular, the compression part produces a lossless compression ratio of 1.88 for the standard Lenna image, while the hiding part can embed digital information amounting to 12.5% of the size of the original image. Results for various images are also provided.

8.
Deformable Registration of Digital Images
This paper proposes a novel elastic model and presents a deformable registration method based on the model. The method registers images without the need to extract features from the images, and therefore works directly on grey-level images. A new similarity metric is given on which the formation of external forces is based. The registration method, taking a coarse-to-fine strategy, constructs external forces at larger scales for the first few iterations to rely more on global evidence, and then at smaller scales for later iterations to allow local refinements. The stiffness of the elastic body decreases as the process proceeds. To make it widely applicable, the method is not restricted to any type of transformation; the variations between images are treated as general free-form deformations. Because the elastic model is linearized, it can be solved very efficiently with high accuracy. The method has been successfully tested on MRI images. It should also find other uses such as matching time-varying picture sequences for motion analysis, fitting templates into images for non-rigid object recognition, and matching stereo images for shape recovery.

9.
In a renminbi (RMB) banknote sorting system, image-analysis software performs the processing and recognition of banknote images. This paper discusses the recognition algorithms involved in RMB image processing and recognition, covering banknote version, denomination, orientation, and serial-number features, and offers a reference scheme for resolving the conflict between the system's real-time performance and its accuracy.

10.
Authentication of image data is a challenging task. Unlike data authentication systems that detect a single bit change in the data, image authentication systems must remain tolerant to changes resulting from acceptable image processing or compression algorithms while detecting malicious tampering with the image. Tolerance to the changes due to lossy compression systems is particularly important because in the majority of cases images are stored and transmitted in compressed form, and so it is important for verification to succeed if the compression is within the allowable range.

In this paper we consider an image authentication system that generates an authentication tag that can be appended to an image to allow the verifier to verify the authenticity of the image. We propose a secure, flexible, and efficient image authentication algorithm that is tolerant to image degradation due to JPEG lossy compression within designed levels. (JPEG is the most widely used image compression system and is the de facto industry standard.) By secure we mean that the cost of the best known attack against the system is high; by flexible we mean that the level of protection can be adjusted so that higher security can be obtained with an increased length of the authentication tag; and by efficient we mean that the computation can be performed largely as part of JPEG compression, allowing the generation of the authentication tag to be efficiently integrated into the compression system. The authentication tag consists of a number of feature codes that can be computed in parallel, so computing the tag is effectively equivalent to computing a single feature code. We prove the soundness of the algorithm and show the security of the system. Finally, we give the results of our experiments.
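The tag idea can be illustrated generically as a keyed MAC over coarsely quantized feature codes. This sketch uses block means with a hypothetical quantization step rather than the paper's JPEG DCT-based features:

```python
import hmac, hashlib

def feature_codes(image, block=4, q=32):
    """Coarsely quantized block means: toy features that survive small,
    compression-like changes (those below the quantization step q)."""
    h, w = len(image), len(image[0])
    codes = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [image[y][x] for y in range(by, min(by + block, h))
                                for x in range(bx, min(bx + block, w))]
            codes.append(round(sum(vals) / len(vals) / q))
    return bytes(c % 256 for c in codes)

def make_tag(image, key):
    """Authentication tag: HMAC-SHA256 over the concatenated feature codes."""
    return hmac.new(key, feature_codes(image), hashlib.sha256).hexdigest()

key = b"shared-secret"
img = [[100] * 8 for _ in range(8)]
tag = make_tag(img, key)

mild = [[v + 2 for v in row] for row in img]   # small, compression-like change
tampered = [row[:] for row in img]
for y in range(4):                              # malicious edit: blank out a block
    for x in range(4):
        tampered[y][x] = 255

print(make_tag(mild, key) == tag)       # small change: same feature codes
print(make_tag(tampered, key) == tag)   # tampering: feature code changed
```

A practical scheme must also handle values near quantization boundaries, which this toy sketch ignores; the paper ties its designed tolerance levels to the JPEG quantizer instead.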

11.
Realistic display of high-dynamic range images is a difficult problem. Previous methods for high-dynamic range image display suffer from halo artifacts or are computationally expensive. We present a novel method for computing local adaptation luminance that can be used with several different visual adaptation-based tone-reproduction operators for displaying visually accurate high-dynamic range images. The method uses fast image segmentation, grouping, and graph operations to generate local adaptation luminance. Results on several images show excellent dynamic range compression, while preserving detail without the presence of halo artifacts. With adaptive assimilation, the method can be configured to bring out a high-dynamic range appearance in the display image. The method is efficient in terms of processor and memory use.

12.
The representation and processing of edges in images based on notions from fuzzy set theory has become popular in recent years. There are several reasons for this direction, from the vague definition of edges to the inherent uncertainty of digital images. Here, we study the transition from a gradient image, a popular intermediate representation, to a fuzzy edge image. We consider different parametric membership functions to transform the gradients into membership degrees. A histogram-based strategy is then introduced for automatically determining the value of those parameters, adapting the membership functions to the characteristics of each image. The functions are applied on the Canny method for edge detection, resulting in an improvement compared to the classical normalizing approach.
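The gradient-to-membership step can be sketched with one parametric choice, a sigmoid membership function whose midpoint is read off the gradient histogram. The 90th-percentile rule below is an illustrative stand-in for the paper's histogram-based parameter selection:

```python
import numpy as np

def fuzzy_edge_membership(gradient, pct=90):
    """Map gradient magnitudes to edge-membership degrees in [0, 1] using a
    sigmoid whose midpoint is taken from the gradient histogram (here the
    90th percentile; the paper's parameter selection is more elaborate)."""
    mid = np.percentile(gradient, pct)
    scale = max(mid / 4.0, 1e-6)  # steepness tied to the midpoint (assumption)
    return 1.0 / (1.0 + np.exp(-(gradient - mid) / scale))

# Toy gradient-magnitude image: flat regions (small values) and edges (large).
grad = np.array([[0, 1, 2], [10, 50, 80], [120, 200, 255]], dtype=float)
mu = fuzzy_edge_membership(grad)
print(np.round(mu, 3))
```

Pixels whose gradient magnitude is well above the histogram-derived midpoint receive membership near 1, while flat regions stay near 0, adapting the fuzzification to each image's gradient distribution.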

13.
The paper is a continuation of a series on the digital geometry of three-dimensional digital images. In earlier reports, D. Morgenthaler and A. Rosenfeld gave symmetric definitions for simple surface points under the concepts of 6-connectivity and 26-connectivity, and they nontrivially characterized a simple closed surface (i.e., a subset of the image which separates its complement into an “inside” and an “outside”) as a connected collection of “orientable” simple surface points. Later, the author and A. Rosenfeld established that the computationally costly assumption of orientability is unnecessary for 6-connectivity by proving that orientability, a local property, is implicitly guaranteed within the (3 × 3 × 3)-neighborhood definition of a 6-connected simple surface point. However, they also showed that no such guarantee exists for 26-connectivity. In this report, the author completes this investigation of simple closed surfaces by showing that orientability is ensured globally by 26-connectivity. Hence, a simple closed surface may be efficiently characterized as a connected collection of simple surface points regardless of the type of connectivity under consideration.

14.
Image segmentation techniques and their application to recognizing pavement cracking damage
王珣, Computer Engineering (计算机工程), 2003, 29(17): 117-119.
This paper studies a neural-network-based image segmentation technique and applies it to pavement recognition. By analyzing photographs of the road surface, the method uses the gray-level and texture features of the pavement image to determine the type, area, and severity of pavement damage and to derive a pavement condition index. It is fast, produces high-quality data, and is convenient, far outperforming current manual survey methods.

15.
Specularities often confound algorithms designed to solve computer vision tasks such as image segmentation, object detection, and tracking. These tasks usually require color image segmentation to partition an image into regions, where each region corresponds to a particular material. Due to discontinuities resulting from shadows and specularities, a single material is often segmented into several sub-regions. In this paper, a specularity detection and removal technique is proposed that requires no camera calibration or other a priori information regarding the scene. The approach specifically addresses detecting and removing specularities in facial images. The image is first processed by the Luminance Multi-Scale Retinex [B.V. Funt, K. Barnard, M. Brockington, V. Cardei, Luminance-Based Multi-Scale Retinex, AIC’97, Kyoto, Japan, May 1997]. Second, potential specularities are detected and a wavefront is generated outwards from the peak of the specularity to its boundary or until a material boundary has been reached. Upon attaining the specularity boundary, the wavefront contracts inwards while coloring in the specularity until the latter no longer exists. The third step is discussed in a companion paper [M.D. Levine, J. Bhattacharyya, Removing shadows, Pattern Recognition Letters, 26 (2005) 251–265] where a method for detecting and removing shadows has also been introduced. The approach involves training Support Vector Machines to identify shadow boundaries based on their boundary properties. The latter are used to identify shadowed regions in the image and then assign to them the color of non-shadow neighbors of the same material as the shadow. Based on these three steps, we show that more meaningful color image segmentations can be achieved by compensating for illumination using the Illumination Compensation Method proposed in this paper. 
It is also demonstrated that the accuracy of facial skin detection improves significantly when this illumination compensation approach is used. Finally, we show how illumination compensation can increase the accuracy of face recognition.

16.
Search-by-content of partially occluded images
This paper describes a method for searching images by content that compensates for occlusions in the image. The method extends a search technique proposed by Stone and Li that is based on two criteria – sum of squares differences and average intensity, but the method can be used in conjunction with other criteria, including the normalized correlation coefficient. Image occlusions have a profound impact on template-matching image searches. A typical example of such an occlusion is a cloud over a land mass in an image taken from a satellite. Searches performed without compensation for occlusions are unable to detect matches in positions near the cloud perimeter that are not totally obscured by the cloud. The compensation method introduced here can discover good matches in regions where patterns overlap occluded regions, and would otherwise be missed. The key idea of the algorithm is to perform computations in a way that removes invalid pixels from summations. The enhanced algorithm requires no more than a factor of two increase in storage and computation costs as compared to the Stone–Li algorithm.
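The key idea, removing invalid pixels from the summations, can be sketched as follows; the sum-of-squares criterion is one of the two Stone–Li criteria, and the data and mask below are illustrative:

```python
import numpy as np

def masked_ssd(window, template, valid):
    """Sum-of-squared-differences over valid (non-occluded) pixels only,
    normalized by the valid-pixel count so scores stay comparable."""
    n = valid.sum()
    if n == 0:
        return np.inf  # fully occluded: no evidence either way
    d = (window - template) ** 2
    return d[valid].sum() / n

template = np.arange(9.0).reshape(3, 3)
window = template.copy()
window[0, 0] = 999.0                 # occluded pixel (e.g. under a cloud)
mask = np.ones((3, 3), bool)
mask[0, 0] = False                   # mark that pixel invalid

print(masked_ssd(window, template, np.ones((3, 3), bool)))  # occlusion dominates
print(masked_ssd(window, template, mask))                   # perfect match elsewhere
```

Normalizing by the count of valid pixels keeps scores comparable between windows with different amounts of occlusion, which is what lets matches near the cloud perimeter survive.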

17.
In this work we propose an automatic low-cost procedure aimed at classifying legume species and varieties based exclusively on the characterization and analysis of the leaf venation network. The identification of leaf venation patterns which are characteristic for each species or variety is not an easy task since in some situations (especially for cultivars from the same species) the vein differences are visually indistinguishable for humans. The proposed procedure takes as input leaf images acquired using a standard scanner, processes the images in order to segment the veins at different scales, and measures different traits on them. We use these features in combination with modern automatic classifiers and feature selection techniques in order to perform recognition. The process was initially applied to recognize three different legumes in order to evaluate the improvements over previous works in the literature, and then it was employed to distinguish three diverse soybean cultivars. The results show the improvements achieved by the usage of the multiscale features. The cultivar recognition is a more challenging problem, since the experts cannot distinguish evident differences in plain sight. However, we achieve acceptable classification results. We also analyze the feature relevance and identify, for each classifier, a small set of distinctive traits to differentiate the species and varieties.

18.
This paper proposes a novel method for document enhancement which combines two recent powerful noise-reduction steps. The first step is based on the Total Variation framework. It flattens background grey-levels and produces an intermediate image where background noise is considerably reduced. This image is used as a mask to produce an image with a cleaner background while keeping character details. The second step is applied to the cleaner image and consists of a filter based on Non-local Means: character edges are smoothed by searching for similar patch images in pixel neighborhoods. The document images to be enhanced are real historical printed documents from several periods which include several defects in their background and on character edges. These defects result from scanning, paper aging and bleed-through. The proposed method enhances document images by combining the Total Variation and Non-local Means techniques in order to improve OCR recognition. The method is shown to outperform each of these techniques used alone, as well as other enhancement methods.

19.
This paper describes the scan de-blueprinting (blue-line removal) technique used in engineering-drawing recognition. It analyzes and evaluates several binarization methods, and proposes a nonlinear adaptive algorithm, a region-segmentation algorithm based on a differential function, and a binarization algorithm based on hierarchical logic. Experiments were also conducted on the automatic recognition of architectural structure drawings. Rather than presenting a single best processing method, the paper points out that for drawings with different types of degradation, different methods or combinations of methods should be used to achieve binarization.

20.
Research on image-processing algorithms for crack-type pavement defects
李晋惠, Computer Engineering and Applications (计算机工程与应用), 2003, 39(35): 212-213, 232.
In the course of detecting crack-type pavement defects, the authors carefully analyzed the image features of pavement defects and developed image-processing algorithms suited to pavement defect recognition. Because crack-type defects include transverse, longitudinal, and irregular cracks, edges may exhibit gradients in any direction; templates in eight directions are therefore constructed for Sobel edge detection. After edge detection, the defect image is processed with a weighted neighborhood-averaging noise filter and Otsu image segmentation. Compared with other classical algorithms, the resulting crack edges are thin (2 pixels wide), the crack edges are well preserved, and breaks along the edges are rare. Applied to the detection of crack-type pavement defects, the algorithm achieves good accuracy and good results.
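The Otsu segmentation step mentioned above can be sketched as follows; the synthetic crack image and the bin-indexed threshold convention are illustrative assumptions, and the paper's pipeline additionally applies eight-direction Sobel templates and weighted neighborhood averaging first:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance.
    Returns t such that pixels with value < t form the dark class."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # probability mass of the dark class
    mu = np.cumsum(p * np.arange(bins))   # cumulative mean of the dark class
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # 0/0 at the histogram ends -> 0
    return int(np.argmax(sigma_b)) + 1

# Hypothetical crack image: dark crack pixels (~30) on bright pavement (~200).
rng = np.random.default_rng(2)
road = np.clip(rng.normal(200.0, 10.0, (32, 32)), 0, 255)
road[16, :] = np.clip(rng.normal(30.0, 5.0, 32), 0, 255)  # 1-pixel-wide crack
t = otsu_threshold(road)
crack = road < t
print(t, int(crack.sum()))
```

With two well-separated intensity clusters, the between-class variance plateaus for thresholds between them, so the dark crack pixels fall cleanly below the returned threshold.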
