Similar Literature
20 similar documents retrieved.
1.
Here, we propose an automatic system for annotating and retrieving images. We assume that regions in an image can be described with a vocabulary of blobs, which are generated by clustering image features. Features are extracted locally on regions to capture color, texture, and shape information, and regions are produced by an efficient segmentation algorithm. Images are structured into a region adjacency graph that captures spatial relationships between regions, and this representation is used to perform similarity search over an image set: the user expresses a need by providing a query image and receives all similar images as the result. Our graph-based approach is benchmarked against conventional bag-of-words methods. Results show good classification behavior of the graph-based solution on two publicly available databases, and experiments illustrate that the structural approach needs a smaller vocabulary size to reach its best performance.
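As a rough illustration of the blob-vocabulary step (a minimal sketch: the feature dimensionality, the 64-cluster vocabulary size, and the random `region_features` are placeholders, not the paper's settings), k-means clustering of region descriptors could look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder region descriptors: one row per segmented region; in the paper
# these would hold local colour, texture and shape features.
rng = np.random.default_rng(0)
region_features = rng.random((500, 16))

# Cluster descriptors into a vocabulary of "blobs".
kmeans = KMeans(n_clusters=64, n_init=10, random_state=0).fit(region_features)
blob_vocabulary = kmeans.cluster_centers_   # one blob prototype per cluster
region_blob_ids = kmeans.labels_            # blob label assigned to each region

# A region from a new image is annotated with its nearest blob.
new_region = rng.random((1, 16))
blob_id = kmeans.predict(new_region)[0]
```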

2.
A polyhedral object in three-dimensional space is often well represented by a set of points and line segments that act as its features. By a nice perspective projection of an object we mean a projection that gives an image in which the features of the object, relevant for some task, are visible without ambiguity. In this paper we consider the problem of computing a variety of nice perspective projections of three-dimensional objects such as simple polygonal chains, wire-frame drawings of graphs, and geometric rooted trees. These problems arise in areas such as computer vision, computer graphics, graph drawing, knot theory, and computational geometry.

3.
李世飞  王平  沈振康 《信号处理》2010,26(3):375-380
In image denoising by diffusion filtering, a central problem is how to control the diffusion coefficient so that the model stops diffusing at image structures while diffusing effectively where there is noise. To better address this problem, this paper adopts a new viewpoint: the image is treated as a surface in three-dimensional space, which yields two fundamental surface characteristics, the Gaussian curvature and the mean curvature. To exploit these surface properties effectively during diffusion filtering, the paper analyzes existing nonlinear diffusion models based on mean or Gaussian curvature, summarizes the characteristics of the two curvatures, and on this basis proposes a diffusion filtering model based on a hybrid curvature. As a new surface-property-based diffusion model, it uses both the Gaussian and the mean curvature of the image and combines their strengths appropriately, so that noise is removed relatively quickly while image detail is preserved.
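For reference, the two surface characteristics mentioned above can be computed from image derivatives as follows (a generic sketch of the curvature formulas for the graph surface z = I(x, y); the paper's hybrid diffusion coefficient itself is not reproduced):

```python
import numpy as np

def surface_curvatures(img):
    """Mean curvature H and Gaussian curvature K of the surface z = I(x, y)."""
    I = img.astype(float)
    Ix, Iy = np.gradient(I)        # first derivatives along rows and columns
    Ixx, Ixy = np.gradient(Ix)     # second derivatives
    _, Iyy = np.gradient(Iy)
    g = 1.0 + Ix**2 + Iy**2        # metric determinant of the graph surface
    K = (Ixx * Iyy - Ixy**2) / g**2
    H = ((1 + Iy**2) * Ixx - 2 * Ix * Iy * Ixy + (1 + Ix**2) * Iyy) / (2 * g**1.5)
    return H, K
```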

4.
PDE-based image restoration: a hybrid model and color image denoising.
The paper is concerned with PDE-based image restoration. A new model is introduced by hybridizing a nonconvex variant of the total variation minimization (TVM) and the motion by mean curvature (MMC) in order to deal with the mixture of the impulse and Gaussian noises reliably. We suggest the essentially nondissipative (ENoD) difference schemes for the MMC component to eliminate the impulse noise with a minimum (ideally no) introduction of dissipation. The MMC-TVM hybrid model and the ENoD schemes are applied for both gray-scale and color images. For color image denoising, we consider the chromaticity-brightness decomposition with the chromaticity formulated in the angle domain. An incomplete Crank-Nicolson alternating direction implicit time-stepping procedure is adopted to solve those differential equations efficiently. Numerical experiments have shown that the new hybrid model and the numerical schemes can remove the mixture of the impulse and Gaussian noises, efficiently and reliably, preserving edges quite satisfactorily.
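For orientation only, the standard forms of the two components being hybridized are given below (the paper's nonconvex TV variant, its ENoD discretization, and the chromaticity-brightness treatment are not reproduced; f denotes the noisy image and λ an assumed fidelity weight):

```latex
% Total variation minimization with an L2 fidelity term
\min_{u}\; \int_{\Omega} |\nabla u|\,dx \;+\; \frac{\lambda}{2}\int_{\Omega} (u-f)^{2}\,dx
% Motion by mean curvature (MMC), used against impulse noise
u_{t} \;=\; |\nabla u|\;\nabla\!\cdot\!\Big(\frac{\nabla u}{|\nabla u|}\Big)
```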

5.
In this paper, we concentrate on a challenging problem, weakly supervised image parsing, in which only weak image-level labels are available in the dataset. Traditionally, an affinity graph over superpixels is constructed to strengthen the weak information by leveraging neighbors from the perspective of image-level labels. Existing work constructs the affinity graph purely from visual relevance, where context homogenization is a common phenomenon that hinders label prediction. To overcome the context homogenization problem, we consider not only the visual and semantic relevance but also the semantic distinction between every target superpixel and its neighbor superpixels when constructing the affinity graph. We propose a novel way of constructing the inter-image contextual graph and design a label propagation framework that jointly combines visual relevance, semantic relevance, and discriminative ability. Extensive experiments on real-world datasets demonstrate that our approach obtains significant gains.
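A minimal sketch of label propagation over a superpixel affinity matrix (this is the generic Zhou-style closed form with an assumed damping parameter `alpha`; the paper's joint visual/semantic/discriminative weighting is not reproduced):

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9):
    """Propagate weak image-level label scores Y over an affinity matrix W.

    W : (n, n) nonnegative affinities between superpixels
    Y : (n, c) initial label scores, one column per class
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt                   # symmetrically normalised affinity
    n = W.shape[0]
    F = np.linalg.solve(np.eye(n) - alpha * S, (1.0 - alpha) * Y)
    return F                                          # propagated label scores
```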

6.
An improved graph-spectral thresholding segmentation algorithm
Image segmentation is a typical ill-structured problem, and graph-spectral partitioning theory has attracted wide attention as a new pattern-analysis tool for segmentation. Existing graph-spectral thresholding methods compute graph weights with an exponential function of the Euclidean distance, which makes them computationally expensive. This paper first proposes a new way of computing graph weights by replacing the exponential function with a rational Cauchy function of the Euclidean distance, and then applies it to a thresholding algorithm based on a graph-spectral partitioning measure, yielding an improved graph-spectral thresholding method. Experimental results show that the method has a low computational cost and produces satisfactory results for images in which the object and background occupy markedly different proportions.
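The weight substitution described above amounts to replacing an exponential of the squared Euclidean distance with a rational Cauchy-type function, which avoids the exp() evaluation; a minimal sketch (`sigma` is an assumed scale parameter):

```python
import numpy as np

def weight_exponential(x, y, sigma=10.0):
    """Conventional graph weight: exponential in the squared Euclidean distance."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / sigma ** 2)

def weight_cauchy(x, y, sigma=10.0):
    """Rational Cauchy-type weight: same monotone decay, no exp() call."""
    return 1.0 / (1.0 + np.sum((np.asarray(x) - np.asarray(y)) ** 2) / sigma ** 2)
```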

7.
We present an extension of the random walker segmentation to images with uncertain gray values. Such gray-value uncertainty may result from noise or other imaging artifacts, or more generally from measurement errors in the image acquisition process. The purpose is to quantify the influence of the gray-value uncertainty on the result when using random walker segmentation. In random walker segmentation, a weighted graph is built from the image, where the edge weights depend on the image gradient between the pixels. For given seed regions, the probability is evaluated for a random walk on this graph starting at a pixel to end in one of the seed regions. Here, we extend this method to images with uncertain gray values. To this end, we consider the pixel values to be random variables (RVs), thus introducing the notion of stochastic images. We end up with stochastic weights for the graph in random walker segmentation and a stochastic partial differential equation (PDE) that has to be solved. We discretize the RVs and the stochastic PDE by the method of generalized polynomial chaos, combining recent developments in numerical methods for the discretization of stochastic PDEs with an interactive segmentation algorithm. The resulting algorithm allows for the detection of regions where the segmentation result is highly influenced by the uncertain pixel values. Thus, it gives a reliability estimate for the resulting segmentation, and it furthermore allows determining the probability density function of the segmented object volume.
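For context, a compact sketch of the deterministic random-walker core that the stochastic extension builds on (a Grady-style formulation on a 4-connected grid; `beta` and the seed handling are assumptions, and the polynomial-chaos machinery is not reproduced):

```python
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import spsolve

def random_walker_prob(img, fg_seeds, bg_seeds, beta=90.0):
    """Probability that a walk started at each pixel reaches a foreground seed first.

    img is a 2-D float array; fg_seeds / bg_seeds are flat pixel indices."""
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                      # right and down neighbours
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        diff = (img[:h - di, :w - dj].ravel() - img[di:, dj:].ravel()) ** 2
        wgt = np.exp(-beta * diff)                       # gradient-dependent edge weight
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = coo_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                   shape=(n, n)).tocsr()
    L = (diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()   # combinatorial Laplacian
    seeds = np.concatenate([fg_seeds, bg_seeds])
    m = np.concatenate([np.ones(len(fg_seeds)), np.zeros(len(bg_seeds))])
    free = np.setdiff1d(np.arange(n), seeds)
    # Solve the Dirichlet problem on the unseeded nodes.
    x = spsolve(L[free][:, free].tocsc(), -(L[free][:, seeds] @ m))
    prob = np.zeros(n)
    prob[seeds], prob[free] = m, x
    return prob.reshape(h, w)
```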

8.
In this work, we have developed a computer-aided diagnosis system based on a two-level artificial neural network (ANN) architecture, which was trained, tested, and evaluated specifically on the problem of detecting lung cancer nodules on digitized chest radiographs. The first ANN performs the detection of suspicious regions in a low-resolution image. The inputs to the second ANN are the curvature peaks computed for all pixels in each suspicious region; this exploits the fact that small tumors possess an identifiable signature in curvature-peak feature space, where curvature is the local curvature of the image data viewed as a relief map. The output of this network is thresholded at a chosen level of significance to give a positive detection. Tests are performed using 60 radiographs taken from routine clinical practice, containing 90 real nodules and 288 simulated nodules. We employed the free-response receiver operating characteristic method, with the mean number of false positives (FPs) and the sensitivity as performance indexes, to evaluate all the simulation results. The combination of the two networks provides 89%-96% sensitivity at 5-7 FPs/image, depending on the size of the nodules.

9.
We consider the geometric random graph where n points are distributed independently on the unit interval [0, 1] according to some probability distribution function F with density f. Two nodes are adjacent (i.e., communicate with each other) if their distance is less than some transmission range. We survey results, some classical and some recently obtained by the authors, concerning the existence of zero-one laws for graph connectivity, the type of zero-one laws under the specific assumptions made, the form of its critical scaling and its dependence on the density f. We also present results and conjectures concerning the width of the corresponding phase transition. Engineering implications are discussed for power allocation.
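On the unit interval, connectivity has a simple characterization that makes the zero-one behaviour easy to probe numerically; a small Monte Carlo sketch under the uniform distribution (sample sizes and ranges below are arbitrary example values):

```python
import numpy as np

def connectivity_probability(n=100, r=0.05, trials=2000, seed=0):
    """Estimate P(connected) for n i.i.d. uniform points on [0, 1] with range r.

    On a line, the geometric graph is connected exactly when every gap
    between consecutive sorted points is smaller than r."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = np.sort(rng.uniform(0.0, 1.0, n))
        hits += bool(np.all(np.diff(x) < r))
    return hits / trials

# Sweeping r around log(n)/n (about 0.046 for n = 100) exposes the sharp transition.
for r in (0.02, 0.04, 0.06, 0.08):
    print(r, connectivity_probability(r=r))
```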

10.
Distributed Downlink Beamforming With Cooperative Base Stations
In this paper, we consider multicell processing on the downlink of a cellular network to accomplish "macrodiversity" transmit beamforming. The particular downlink beamformer structure we consider allows a recasting of the downlink beamforming problem as a virtual linear minimum mean-square error (LMMSE) estimation problem. We exploit the structure of the channel and develop distributed beamforming algorithms using local message passing between neighboring base stations. For 1-D networks, we use the Kalman smoothing framework to obtain a forward-backward beamforming algorithm. We also propose a limited-extent version of this algorithm, which shows that in practice the delay need not grow with the size of the network. For 2-D cellular networks, we remodel the network as a factor graph and present a distributed beamforming algorithm based on the sum-product algorithm. Despite the presence of loops in the factor graph, the algorithm produces optimal results if convergence occurs.

11.
We consider multiamplitude, multitrack runlength-limited (d, k) constrained channels with and without clock redundancy. We calculate the Shannon capacities of these channels and present some simple 100% efficient codes. To compute capacity, a constraint graph equivalent to the usual runlength-limited constraint graph is used. The introduced graph model has a vertex labeling that is independent of the number of tracks written in parallel, which provides computational savings when the number of tracks is large. We show that increasing the number of tracks written in parallel provides a significant increase in per-track capacity for the more restrictive clocking constraint case, i.e., when k …
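For a single binary track, the capacity computation mentioned above reduces to the spectral radius of the constraint-graph adjacency matrix; a minimal sketch (the multitrack, multiamplitude and clock-redundancy extensions of the paper are not reproduced):

```python
import numpy as np

def rll_capacity(d, k):
    """Shannon capacity (bits/symbol) of a binary (d, k) runlength-limited constraint.

    State i = number of 0s emitted since the last 1; capacity = log2 of the
    largest eigenvalue of the constraint-graph adjacency matrix."""
    A = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        if i < k:
            A[i, i + 1] = 1.0          # emit another 0
        if i >= d:
            A[i, 0] = 1.0              # emit a 1 (a run of i zeros is legal)
    return float(np.log2(np.max(np.abs(np.linalg.eigvals(A)))))

print(rll_capacity(1, 3))   # classic (1, 3) constraint, about 0.55 bits/symbol
```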

12.
The problem of semi-automatic segmentation has attracted much interest over the last few years. The Random Walker algorithm [1] has proven to be quite a popular solution to this problem, as it is able to deal with several components and models the image using a convenient graph structure. We propose two improvements to the image graph used by the Random Walker method. First, we propose a new way of computing the edge weights. Traditionally, such weights are based on the similarity between two neighbouring pixels, using their greyscale intensities or colours. We substitute a definition of weights based on the probability distributions of colours, which is much more robust than traditional measures because it allows for textured objects and objects composed of multiple perceptual components. Second, the traditional graph has the set of pixels as its vertex set, with edges between each pair of neighbouring pixels. We substitute a smaller, irregular graph based on Mean Shift oversegmentation. This new graph is typically several orders of magnitude smaller than the original image graph, which can lead to major savings in computing time. We show results demonstrating the substantial improvement achieved when using the proposed image graph.
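A minimal sketch of a distribution-based edge weight between two adjacent regions (a Bhattacharyya coefficient over joint RGB histograms is used here as one plausible choice; the bin count and colour range are assumptions, and the Mean Shift oversegmentation step is not shown):

```python
import numpy as np

def colour_histogram(pixels, bins=8):
    """Normalised joint RGB histogram of one region (pixels: N x 3, values in [0, 1])."""
    h, _ = np.histogramdd(pixels, bins=bins, range=[(0.0, 1.0)] * 3)
    return h.ravel() / max(h.sum(), 1.0)

def edge_weight(pixels_a, pixels_b, bins=8):
    """Weight from the similarity of two regions' colour distributions."""
    ha, hb = colour_histogram(pixels_a, bins), colour_histogram(pixels_b, bins)
    return float(np.sum(np.sqrt(ha * hb)))   # 1.0 means identical distributions
```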

13.
A wavelet-transform image denoising method based on curvature variational regularization
周先春  吴婷  石兰芳  陈铭 《电子学报》2018,46(3):621-628
Noise and image detail are concentrated mainly in the high-frequency part of an image, so important features such as edges and fine textures are easily destroyed during denoising. To address this, this paper proposes a wavelet-transform image denoising method based on curvature variational regularization. A wavelet transform first extracts the high-frequency components of the image and the image is enhanced; the level-set curvature of the enhanced image is then used to build a curvature-driven function, which is introduced into a variational model as a correction factor, yielding a curvature variational model that controls the overall image structure. In the absence of image gradient information, the model overcomes the erroneous diffusion of the ROF model and conforms to the morphological principles of image processing. Finally, the curvature variational model processes the extracted high-frequency components, which are recombined with the original low-frequency components to reconstruct the denoised image. Analysis and simulation results show that the new algorithm suppresses noise effectively, achieves very high structural similarity, and gives a clear denoising improvement.
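The decompose/process/reconstruct skeleton can be sketched with a 2-D wavelet transform as below (soft thresholding stands in for the curvature-variational model, which is not reproduced; the wavelet and threshold are assumed values):

```python
import numpy as np
import pywt

def highpass_process_reconstruct(img, wavelet="db2", thresh=0.05):
    """Split an image into low/high-frequency sub-bands, process only the
    high-frequency part, and reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    cH, cV, cD = (pywt.threshold(c, thresh * np.abs(c).max(), mode="soft")
                  for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)
```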

14.
We present a unified approach to noise removal, image enhancement, and shape recovery in images. The underlying approach relies on the level set formulation of curve and surface motion, which leads to a class of PDE-based algorithms. Beginning with an image, the first stage removes noise and enhances the image by evolving it under a flow controlled by min/max curvature and by the mean curvature. This stage is applicable to both salt-and-pepper grey-scale noise and full-image continuous noise present in black and white images, grey-scale images, texture images, and color images. The noise removal/enhancement schemes applied in this stage contain only one enhancement parameter, which in most cases is chosen automatically. The other key advantage of our approach is that a stopping criterion is picked automatically from the image: continued application of the scheme produces no further change. The second stage is the shape recovery of a desired object; we again exploit the level set approach to evolve an initial curve/surface toward the desired boundary, driven by an image-dependent speed function that automatically stops at the desired boundary.
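A simplified sketch of the curvature-driven smoothing stage (plain mean-curvature flow in level-set form; the min/max switching and the automatic stopping criterion of the paper are not reproduced, and the step size and iteration count are assumptions):

```python
import numpy as np

def curvature_flow(img, n_iter=50, dt=0.1, eps=1e-8):
    """Evolve u_t = |grad u| * div(grad u / |grad u|), which smooths level curves."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        ux, uy = np.gradient(u)
        uxx, uxy = np.gradient(ux)
        _, uyy = np.gradient(uy)
        # |grad u| times curvature, via the standard second-derivative identity
        num = uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2
        u += dt * num / (ux**2 + uy**2 + eps)
    return u
```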

15.
A graph-spectral approach to shape-from-shading
In this paper, we explore how graph-spectral methods can be used to develop a new shape-from-shading algorithm. We characterize the field of surface normals using a weight matrix whose elements are computed from the sectional curvature between different image locations and penalize large changes in surface normal direction. Modeling the blocks of the weight matrix as distinct surface patches, we use a graph seriation method to find a surface integration path that maximizes the sum of curvature-dependent weights and that can be used for the purposes of height reconstruction. To smooth the reconstructed surface, we fit quadrics to the height data for each patch. The smoothed surface normal directions are updated ensuring compliance with Lambert's law. The processes of height recovery and surface normal adjustment are interleaved and iterated until a stable surface is obtained. We provide results on synthetic and real-world imagery.

16.
To improve the denoising and edge-preserving performance of total-variation image denoising models, this paper proposes a minimal-surface filter driven by a curvature difference. First, an adaptive curvature-difference edge-detection function is introduced into the mean-curvature filter model to build a curvature-difference-driven minimal-surface filter model. Next, the physical meaning of this energy functional and the way its mean-curvature energy is reduced are explained from the standpoint of differential geometry. Finally, in the discrete image domain, the surface within each pixel's neighborhood is iteratively evolved toward a minimal surface, minimizing the mean-curvature energy of the functional and hence its total energy. Experiments show that the filter removes Gaussian noise, salt-and-pepper noise, and mixtures of the two, and that its denoising and edge-preserving performance surpasses that of five other total-variation algorithms of the same type.

17.

This work introduces the three-dimensional steerable discrete cosine transform (3D-SDCT), which is obtained from the relationship between the discrete cosine transform (DCT) and the graph Fourier transform of a signal on a path graph. It exploits the fact that the basis vectors of the 3D-DCT form a possible eigenbasis for the Laplacian of the product of such graphs, and the proposed transform employs a rotated version of the 3D-DCT basis. We then evaluate the applicability of the 3D-SDCT to 3D medical image compression, considering the case in which there is only one pair of rotation angles per block, so that all 3D-DCT basis vectors are rotated by the same pair. The results show that the 3D-SDCT can be used efficiently in this application scenario and that it outperforms the classical 3D-DCT.
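A toy 2-D sketch of the steering idea (an assumed reading of the construction: DCT basis vectors that share a product-graph Laplacian eigenvalue can be jointly rotated by a Givens rotation and still form an orthonormal eigenbasis; the indices and angle below are arbitrary examples, and the paper's 3-D transform and compression pipeline are not reproduced):

```python
import numpy as np
from scipy.fft import dct

N = 8
C = dct(np.eye(N), axis=0, norm="ortho")   # orthonormal DCT-II matrix: C @ x gives coefficients
B1 = C.T                                    # columns = 1-D DCT basis vectors (path-graph eigenvectors)

i, j, theta = 1, 2, np.pi / 6               # example pair of indices to steer, example angle
v_ij = np.kron(B1[:, i], B1[:, j])          # separable 2-D basis vector (i, j)
v_ji = np.kron(B1[:, j], B1[:, i])          # partner vector (j, i): same Laplacian eigenvalue
v_rot_1 = np.cos(theta) * v_ij + np.sin(theta) * v_ji
v_rot_2 = -np.sin(theta) * v_ij + np.cos(theta) * v_ji

# The rotated pair is still orthonormal, so it can replace (v_ij, v_ji) in the basis.
print(np.allclose([v_rot_1 @ v_rot_1, v_rot_2 @ v_rot_2, v_rot_1 @ v_rot_2], [1.0, 1.0, 0.0]))
```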

18.
A graph-based partial differential equation (PDE) image denoising method is proposed. When constructing the graph topology, a small-world model is introduced to reduce the graph diameter and speed up the convergence of the algorithm, and the choice of the optimal parameter in the graph weight function is evaluated. Denoising is then performed with the graph Laplacian matrix and the heat-diffusion equation on the graph. Simulation results show that the method removes Gaussian noise effectively, preserves edges and other details well, and outperforms other PDE denoising methods in both denoising performance and convergence rate.
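A minimal sketch of heat diffusion on a pixel graph whose 4-neighbour grid edges are augmented with random long-range ("small-world") edges (the number of extra edges, the weight scale and the step size are assumed parameters; the paper's optimal-parameter analysis is not reproduced):

```python
import numpy as np

def small_world_heat_denoise(img, n_long=200, h2=0.02, tau=0.1, n_iter=30, seed=0):
    """Explicit heat-diffusion steps on a pixel graph with extra long-range edges."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    n = H * W
    u = img.astype(float).ravel()
    idx = np.arange(n).reshape(H, W)
    # Edge list: horizontal and vertical grid edges plus random long-range edges.
    a = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel(), rng.integers(0, n, n_long)])
    b = np.concatenate([idx[:, 1:].ravel(),  idx[1:, :].ravel(),  rng.integers(0, n, n_long)])
    for _ in range(n_iter):
        w = np.exp(-(u[a] - u[b]) ** 2 / h2)       # photometric edge weights
        lap = np.zeros(n)                          # (L u)_i = sum_j w_ij (u_i - u_j)
        np.add.at(lap, a, w * (u[a] - u[b]))
        np.add.at(lap, b, w * (u[b] - u[a]))
        u -= tau * lap                             # one explicit heat-equation step
    return u.reshape(H, W)
```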

19.
The parameters of the prior, the hyperparameters, play an important role in Bayesian image estimation. Of particular importance for Gibbs priors is the global hyperparameter, beta, which multiplies the Hamiltonian. Here we consider maximum likelihood (ML) estimation of beta from incomplete data, i.e., problems in which the image, drawn from a Gibbs prior, is observed indirectly through some degradation or blurring process. Important applications include image restoration and image reconstruction from projections. Exact ML estimation of beta from incomplete data is intractable for most image-processing problems. Here we present an approximate ML estimator that is computed simultaneously with a maximum a posteriori (MAP) image estimate. The algorithm is based on a mean field approximation technique through which multidimensional Gibbs distributions are approximated by a separable function equal to a product of one-dimensional (1-D) densities. We show how this approach can be used to simplify the ML estimation problem. We also show how the Gibbs-Bogoliubov-Feynman (GBF) bound can be used to optimize the approximation for a restricted class of problems. We present the results of a Monte Carlo study that examines the bias and variance of this estimator when applied to image restoration.
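For orientation, a minimal statement of the ML condition being approximated (notation assumed: Gibbs prior p(x | beta) = exp{-beta H(x)}/Z(beta), indirect observation y; the mean-field and GBF approximations themselves are not reproduced):

```latex
% Setting the derivative of the incomplete-data log likelihood to zero gives
\frac{\partial}{\partial \beta}\,\log p(y \mid \beta)
  \;=\; \mathbb{E}_{\beta}\!\big[H(X)\big] \;-\; \mathbb{E}\!\big[H(X)\mid y,\beta\big] \;=\; 0 ,
% i.e. at the ML estimate the prior expectation of the Hamiltonian matches its
% posterior expectation; both expectations are intractable and are replaced by
% mean-field (factorized) approximations.
```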
