Found 20 similar documents; search took 15 ms.
1.
Kostadin Koroutchev Elka Korutcheva 《Pattern recognition》2009,42(8):1684-1692
The purpose of this paper is to introduce an algorithm that can detect the most unusual part of a digital image in a probabilistic setting. The most unusual part of a given shape is defined as the part of the image that has the maximal distance to all non-intersecting parts with the same shape. The method is tested on two- and three-dimensional images and shows very good results without any predefined model. A version of the method that is independent of image contrast is also considered and is found useful for finding the most unusual part (and the most similar part) of an image conditioned on a given image. The results can be used to scan large image databases, for example medical databases.
2.
This article investigates and compiles some of the techniques most used for smoothing or suppressing speckle noise in ultrasound images. With this information, all the methods studied are compared in an experiment, using quality metrics to test their performance and show the benefits each one can contribute. To test the methods, a synthetic, noise-free image of a kidney is created and then corrupted in simulations using the Field II program. This way, the smoothing techniques can be compared using numerical metrics, taking the noise-free image as the reference. Since real ultrasound images are already corrupted by noise and real noise-free images do not exist, conventional metrics cannot be used to indicate the quality obtained by filtering them. Nevertheless, we propose applying the tendencies observed in our study to real images.
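The reference-based comparison described above rests on standard quality metrics such as MSE and PSNR. Below is a minimal sketch of both; the function names and toy images are illustrative, not from the article:

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between two images of identical shape."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return np.mean((reference - test) ** 2)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(reference, test)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)

# Toy check: heavier noise yields a lower PSNR against the clean reference.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
slightly_noisy = clean + rng.normal(0, 2, clean.shape)
very_noisy = clean + rng.normal(0, 20, clean.shape)
print(round(psnr(clean, slightly_noisy), 1), round(psnr(clean, very_noisy), 1))
```

Because these metrics need the noise-free reference, they only apply to the synthetic kidney phantom, which is exactly the point the abstract makes.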
3.
Block-matching and 3D filtering (BM3D) is one of the best-performing denoising algorithms currently available. However, its high time complexity and its need for an accurate image noise-level parameter as input greatly limit its wide application. This paper therefore first adopts a grid-based block-matching strategy and proposes a fast BM3D (FBM3D) algorithm. It then presents an iterative blind estimation algorithm for the image noise level, in which an SVM learning algorithm determines the initial value of the iteration and image quality determines whether the iteration terminates. Experiments show that, compared with the original BM3D algorithm, the proposed algorithm clearly improves computational efficiency, visual quality, and quantitative measures.
4.
A neural network approach to the extraction of 1-hop F-layer traces from oblique-incidence ionograms is shown to offer performance at least comparable with conventional filtering techniques. Preprocessing in the form of background-noise and vertical (horizontal) line removal is applied prior to training a 1107100 MLP using backpropagation with momentum. It is further demonstrated that such successful trace extraction can be achieved with just 50 ionogram training exemplars.
5.
In image processing, image similarity indices evaluate how much structural information is maintained by a processed image in relation to a reference image. Commonly used measures, such as the mean squared error (MSE) and peak signal-to-noise ratio (PSNR), ignore the spatial information (e.g. redundancy) contained in natural images, which can lead to a similarity evaluation inconsistent with human visual perception. Recently, a structural similarity measure (SSIM) that quantifies image fidelity through estimation of local correlations scaled by local brightness and contrast comparisons was introduced by Wang et al. (2004). This correlation-based SSIM outperforms MSE in the similarity assessment of natural images. However, as correlation only measures linear dependence, distortions from multiple sources or nonlinear image processing such as nonlinear filtering can cause SSIM to under- or overestimate the true structural similarity. In this article, we propose a new similarity measure that replaces the correlation and contrast comparisons of SSIM by a term obtained from a nonparametric test that has superior power to capture general dependence, including linear and nonlinear dependence in the conditional mean regression function as a special case. The new similarity measure, applied to images subjected to noise contamination, filtering, and watermarking, provides a more consistent image structural fidelity measure than commonly used measures.
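For reference, the SSIM index that this article modifies combines luminance, contrast, and structure terms. Below is a single-window simplification (the published index uses local sliding windows and averages the result); the function name and toy data are illustrative:

```python
import numpy as np

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM (after Wang et al., 2004): luminance, contrast,
    and structure terms computed over the whole image rather than over
    sliding windows -- a simplification for illustration."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 25, img.shape), 0, 255)
```

The covariance term `cov` is exactly the linear-dependence measure the article proposes to replace with a nonparametric dependence statistic.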
6.
A new scheme for automatically constructing a 3D individualized head model from only a side view and a front view of the face is presented. The approach instantiates a generic 3D head model based on a set of the individual's facial features extracted by a local maximum-curvature tracking (LMCT) algorithm that we have developed. A distortion vector field that deforms the generic model to that of the individual is computed by correspondence matching and interpolation. The two input facial images are blended and texture-mapped onto the 3D head model. Arbitrary views of a person can be generated from the two orthogonal images, and the method can be implemented efficiently on a low-cost, PC-based platform.
7.
Based on the characteristics of impulse noise, the statistics of pixel gray values within a detection window are used to adaptively detect the noise points in a digital image. The filtering algorithm processes only the detected noise points, replacing each with the mean of all signal points in its neighborhood after the extreme values are removed. Experimental results show that both the filtering performance and the computation speed of this algorithm are clearly better than those of the commonly used median filter, giving it good practical value.
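The detect-then-filter idea can be sketched as follows. The detection rule here (pixels at the salt-and-pepper extremes 0/255) and the function name are assumptions for illustration, not the paper's exact window statistics:

```python
import numpy as np

def impulse_denoise(img, window=3):
    """Replace suspected impulse-noise pixels (extreme values 0 or 255)
    with the trimmed mean of the signal pixels in their neighborhood;
    signal pixels are left untouched."""
    img = np.asarray(img, dtype=np.float64)
    out = img.copy()
    r = window // 2
    padded = np.pad(img, r, mode="edge")
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] not in (0.0, 255.0):  # signal pixel: leave alone
                continue
            patch = padded[i:i + window, j:j + window].ravel()
            signal = patch[(patch != 0.0) & (patch != 255.0)]
            if signal.size > 2:                # drop min and max ("去极值")
                signal = np.sort(signal)[1:-1]
            if signal.size:
                out[i, j] = signal.mean()
    return out

# A flat patch with one salt pixel: only that pixel is corrected.
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0
out = impulse_denoise(img)
```

Processing only detected noise points is what lets such a filter beat a plain median filter on both quality (signal pixels are untouched) and speed.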
8.
We propose a new method for the blind robust watermarking of digital images based on independent component analysis (ICA). We apply ICA to compute statistically independent transform coefficients in which we embed the watermark. The main advantages of this approach are twofold. On the one hand, each user can define his or her own ICA-based transformation. These transformations behave as “private keys” of the method. On the other hand, we show that some of these transform coefficients have white-noise-like spectral properties. We develop an orthogonal watermark that can be blindly detected with a simple matched filter. We also address relevant issues such as the perceptual masking of the watermark and the estimation of the detection probability. Finally, experiments are included to illustrate the robustness of the method against common attacks and to compare its performance with other transform-domain watermarking algorithms.
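The matched-filter detection step can be sketched on white-noise-like coefficients. A seeded pseudo-random ±1 sequence stands in for the user's ICA "private key", and every name, threshold, and strength below is an illustrative assumption:

```python
import numpy as np

def make_watermark(n, key):
    """Pseudo-random +/-1 sequence derived from a user key; a seeded
    generator stands in for the user-specific ICA transform."""
    rng = np.random.default_rng(key)
    return rng.choice((-1.0, 1.0), size=n)

def embed(coeffs, wm, alpha=0.5):
    """Additive spread-spectrum embedding with strength alpha."""
    return coeffs + alpha * wm

def detect(coeffs, wm, threshold=0.1):
    """Matched filter: correlate received coefficients with the watermark
    and compare the normalized score against a threshold."""
    score = np.dot(coeffs, wm) / coeffs.size
    return score > threshold

rng = np.random.default_rng(7)
host = rng.normal(0, 1, 4096)  # white-noise-like transform coefficients
wm = make_watermark(4096, key=42)
```

Detection is blind: only the key (hence the watermark sequence) is needed, not the original image, which is why white-noise-like coefficients make the simple correlator near-optimal.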
9.
This paper proposes two improvements to the traditional generation of Gaussian image noise, addressing the Gaussian property and the noise rate, and gives histograms of Gaussian-noise images produced by the improved and the traditional methods. Compared with the traditional method, the improved methods yield a more accurate Gaussian property, an adjustable noise rate (the ratio of noise-affected pixels to total pixels), and an unchanged “white” property, making them better suited as a noise source for image-processing research.
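An adjustable noise rate in the sense defined above (fraction of pixels affected) can be sketched as follows. The function name is an assumption, and note that clipping to [0, 255] slightly truncates the Gaussian tails:

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, rate=1.0, rng=None):
    """Corrupt a fraction `rate` of the pixels with zero-mean Gaussian
    noise of standard deviation `sigma`; the remaining pixels are left
    untouched. `rate` is the noise rate defined in the abstract."""
    rng = np.random.default_rng() if rng is None else rng
    img = np.asarray(img, dtype=np.float64)
    mask = rng.random(img.shape) < rate      # which pixels get noise
    noisy = img + mask * rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255)            # clipping truncates the tails

rng = np.random.default_rng(0)
img = rng.integers(10, 240, size=(64, 64)).astype(np.float64)
same = add_gaussian_noise(img, sigma=10.0, rate=0.0, rng=np.random.default_rng(1))
noisy = add_gaussian_noise(img, sigma=10.0, rate=1.0, rng=np.random.default_rng(1))
```

With `rate=1.0` this reduces to the traditional generator; smaller rates leave the chosen fraction of pixels exactly intact.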
10.
S. K. Parui B. Uma Shankar A. Mukherjee D. Dutta Majumder 《Pattern recognition letters》1991,12(12):765-770
To enhance linear structures in a gray-level image, local operations with an additive score are normally used. Here a multiplicative score is used instead, which gives better results than the additive one. The problem of segmenting the image of the multiplicative score is then addressed, with the threshold value selected automatically. Experimental results on some satellite images are reported.
11.
The three-dimensional reconstruction of heart arteries, using two radiographic images, is an ambiguous problem and results in multiple solutions. The correct solution can only be identified using a priori information. This work proposes a method to determine the number of possible solutions, which may be useful for the development of algorithms to solve this ambiguity problem. Our method proceeds by generating and counting all the possible solutions. Using real data, it was found that the number of solutions is moderate and an extensive search could be a feasible option at certain levels of the search process. The currently available computation power should be able to handle this solution space.
12.
Ismael Baeza José-Antonio Verdoy Rafael-Jacinto Villanueva Javier Villanueva-Oller 《Image and vision computing》2010
In this paper, we propose an algorithm for lossy adaptive encoding of digital three-dimensional (3D) images based on singular value decomposition (SVD). This encoding allows us to design algorithms for progressive transmission and reconstruction of the 3D image, for one or several selected regions of interest (ROI), avoiding redundancy in data transmission. The main characteristic of the proposed algorithms is that the ROIs can be selected during the transmission process and it is not necessary to re-encode the image to transmit the data corresponding to the selected ROI. An example with a data set of a CT scan consisting of 93 parallel slices, to which we added an implanted tumor (the ROI in this example), and a comparison with JPEG2000 are given.
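The lossy SVD building block such a coder rests on can be sketched as follows; the block size and function names are illustrative, and the paper's progressive-transmission and ROI machinery is not shown:

```python
import numpy as np

def svd_encode(block, k):
    """Keep the k largest singular triplets of an image block; the
    factors' product approximates the block, and sending triplets one at
    a time gives a naturally progressive refinement."""
    u, s, vt = np.linalg.svd(np.asarray(block, dtype=np.float64),
                             full_matrices=False)
    return u[:, :k], s[:k], vt[:k, :]

def svd_decode(u, s, vt):
    """Reconstruct the rank-k approximation from the kept factors."""
    return (u * s) @ vt

# Reconstruction error shrinks as more singular values are kept.
rng = np.random.default_rng(1)
block = rng.random((16, 16))
err = [np.linalg.norm(block - svd_decode(*svd_encode(block, k)))
       for k in (2, 8, 16)]
```

Because each additional singular triplet only refines the current approximation, previously transmitted data never needs to be re-encoded when a new ROI is requested at higher quality.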
13.
In this work, a new multisecret sharing scheme for secret color images among a set of users is proposed. The protocol allows each participant to share a secret color image with the rest of the participants in such a way that all of them can recover all the secret color images only if the whole set of participants pools their shadows. The proposed scheme is based on the use of two-dimensional reversible cellular automata with memory. The security of the scheme is studied, and it is proved that the protocol is ideal and perfect and that it resists the most important statistical attacks.
14.
Stina Svensson Gabriella Sanniti di Baja 《Computer Vision and Image Understanding》2003,90(3):242-257
The curve skeleton of a 3D solid object provides a useful tool for shape analysis tasks. In this paper, we use a recent skeletonization algorithm, based on voxel classification, that produces a nearly thin, i.e., at most two-voxel-thick, curve skeleton. We introduce a novel way to compress the nearly thin curve skeleton to one-voxel thickness, as well as an efficient pruning algorithm able to remove unnecessary skeleton branches without causing excessive loss of information. To this purpose, the pruning condition is based on the distribution of significant elements along skeleton branches. The definition of significance depends on the adopted skeletonization algorithm; in our case, it is derived from the voxel classification used during skeletonization.
15.
C. Mariño M. G. Penedo M. Penas M. J. Carreira F. Gonzalez 《Pattern Analysis & Applications》2006,9(1):21-33
Traditional authentication (identity verification) systems, used to gain access to a private area in a building or to data stored in a computer, are based on something the user has (an authentication card, a magnetic key) or something the user knows (a password, an identification code). However, emerging technologies allow for more reliable and comfortable user authentication methods, most of them based on biometric parameters. Much work can be found in the literature on biometric-based authentication, using parameters such as the iris, voice, fingerprints, face characteristics, and others. In this work a novel authentication method is presented and preliminary results are shown. The biometric parameter employed for the authentication is the retinal vessel tree, acquired through retinal digital images, i.e., photographs of the fundus of the eye. It has already been asserted by expert clinicians that the configuration of the retinal vessels is unique to each individual and does not vary during a person's life, so it is a very well-suited identification characteristic. Before the verification process can be executed, a registration step is required to align the reference image and the picture to be verified. A fast and reliable registration method is used to perform this step, so that the whole authentication process takes about 0.3 s.
16.
S.S. Maniccam 《Pattern recognition》2004,37(3):475-486
This paper presents image lossless compression and information-hiding schemes based on the same methodology. The methodology presented here is based on the known SCAN formal language for data accessing and processing. In particular, the compression part produces a lossless compression ratio of 1.88 for the standard Lenna image, while the hiding part is able to embed digital information at 12.5% of the size of the original image. Results from various other images are also provided.
17.
Yueh-Ling Lin Mao-Jiun J. Wang 《Expert systems with applications》2012,39(5):5012-5018
Constructing a 3D human model from 2D images provides a cost-effective approach to visualizing a digital human in a virtual environment. This paper presents a systematic approach for constructing a 3D human model using the front and side images of a person. The silhouettes of the human body are first detected, and the feature points on the silhouettes are subsequently identified. The feature points are then used to obtain the body dimensions necessary for identifying a template 3D human model. The shape of the template human model can be modified by the free-form deformation method. Moreover, the proposed approach has been applied to construct the 3D human models of 30 subjects. The comparisons between the constructed 3D models and the 3D scanning models of the 30 subjects indicate that the proposed system is very effective and robust.
18.
19.
Noise removal from color images
The noise effects in color images are studied from the human-perception and machine-perception points of view. Three justifiable observations are made to illustrate problems related to processing the individual color signals. To minimize the noise effects, two solutions are studied: one is a rental scheme and the other is a vector signal processing technique. The rental scheme applies filters originally developed for grey-scale images to color images; a set of heuristic criteria is defined to reconstruct an output with minimum artifacts. The vector signal processing technique uses a median vector filter based on the well-developed median filter for grey-scale images. Since the output of such a filter does not have the same physical meaning as the median defined in one-dimensional space, the search for a vector median is formulated as a minimization problem, and the output is guaranteed to be one of the inputs. Both approaches are shown to be very effective in removing speckle noise. Results from real and synthetic images are obtained and compared.
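The vector median search described above, whose output is guaranteed to be one of the inputs, can be sketched as a minimization of summed distances; the function name is an illustrative assumption:

```python
import numpy as np

def vector_median(vectors):
    """Vector median of a set of color pixels: the input vector whose sum
    of Euclidean distances to all the others is minimal, so the output is
    always one of the inputs."""
    v = np.asarray(vectors, dtype=np.float64)
    # Pairwise distance matrix (n x n), then row sums.
    d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1).sum(axis=1)
    return v[np.argmin(d)]

# A cluster of similar pixels plus two outliers: the median is drawn from
# the cluster, never from the outliers.
pix = [[0, 0, 0], [10, 10, 10], [11, 11, 11], [12, 12, 12], [255, 0, 0]]
m = vector_median(pix)
```

Picking an existing input vector is what prevents the color artifacts that componentwise median filtering can introduce.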
20.
This paper investigates the application of variations of Stochastic Relaxation with Annealing (SRA), as proposed by Geman and Geman [1], to the Bayesian restoration of binary images corrupted by white noise. After a general review, we present some specific prior models and show examples of their application. It appears that proper selection of the prior model is critical to the success of the method. We obtained better results on artificial images that fitted the model closely than on real images for which there was no precise model.
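The SRA approach above can be sketched with the simplest binary prior, an Ising model, sampled by an annealed Gibbs sampler. The function, the cooling schedule, and all parameter values are illustrative assumptions, not the paper's actual prior models:

```python
import numpy as np

def restore_binary(noisy, beta=1.5, sigma=0.8, sweeps=20, rng=None):
    """Gibbs-sampler restoration of a {-1,+1} image under an Ising prior
    with a Gaussian white-noise likelihood, annealed toward low
    temperature -- a minimal sketch of the SRA idea."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.sign(noisy).astype(np.float64)  # initialize at the pixelwise MLE
    x[x == 0] = 1.0
    h, w = x.shape
    for sweep in range(sweeps):
        t = max(0.1, 1.0 - sweep / sweeps)  # simple linear cooling schedule
        for i in range(h):
            for j in range(w):
                nb = sum(x[a, b]
                         for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < h and 0 <= b < w)
                # Energy gain of setting x[i, j] = +1 rather than -1:
                # prior term 2*beta*nb plus data term 2*y/sigma^2.
                de = 2.0 * beta * nb + 2.0 * noisy[i, j] / sigma ** 2
                p_plus = 1.0 / (1.0 + np.exp(-np.clip(de / t, -60.0, 60.0)))
                x[i, j] = 1.0 if rng.random() < p_plus else -1.0
    return x

# A two-phase test image corrupted by white noise, then restored.
rng = np.random.default_rng(3)
truth = np.ones((16, 16))
truth[:, 8:] = -1.0
noisy = truth + rng.normal(0.0, 0.8, truth.shape)
restored = restore_binary(noisy, rng=np.random.default_rng(4))
```

The smoothing prior strength `beta` plays exactly the role the abstract highlights: a prior that fits the image well (here, piecewise-constant regions) makes restoration succeed, while a mismatched prior would oversmooth real images.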