20 similar records retrieved (search time: 15 ms)
1.
As a key step in image recognition and image understanding, image segmentation has long received attention and many algorithms have been proposed, yet it still faces many challenges. The main difficulty in medical image segmentation is the continuous, effective segmentation of blurred edges, which is a prerequisite for accurate target extraction. A new medical image segmentation algorithm is proposed: building on the Laplacian level-set segmentation algorithm, it incorporates the region information of the image and redefines the speed function that drives the evolution of the level-set surface. Besides the edge-gradient information of the image, the algorithm fully integrates its region information, so that it preserves local edge features while exploiting the global optimization properties of the region term, enabling effective segmentation of medical images. Compared with the classical level-set method, the improved method better preserves boundary continuity and yields more complete segmentation results, providing reliable scientific data for image analysis.
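The abstract does not state the exact speed function; a common way to couple an edge term and a region term in a level-set evolution (in the spirit of geodesic active contours combined with Chan-Vese region forces, given here as an illustrative sketch rather than the paper's formulation) is

$$ \frac{\partial \phi}{\partial t} \;=\; g\!\left(\lVert\nabla I\rVert\right)(\kappa + \nu)\,\lVert\nabla\phi\rVert \;+\; \lambda\!\left[(I-c_2)^2-(I-c_1)^2\right]\lVert\nabla\phi\rVert, $$

where $g$ is a decreasing edge-stopping function, $\kappa$ the curvature of the level set, $c_1$, $c_2$ the mean intensities inside and outside the contour, and $\nu$, $\lambda$ assumed weighting constants.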
2.
A correction is made and an elementary derivation is given of Clark's (ibid., vol. 11, pp. 43-57, 1989) decomposition of the Laplacian $\nabla^2 f(x,y)$ into a second directional derivative in the gradient direction of $f$, plus the product of the gradient magnitude and the curvature of the level curve through $(x,y)$.
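The decomposition in question is the standard gauge-coordinate identity (quoted here from general knowledge, with the sign of the curvature depending on the orientation convention for the level curve):

$$ \nabla^2 f \;=\; f_{\eta\eta} \;+\; \kappa\,\lVert\nabla f\rVert, \qquad \eta = \frac{\nabla f}{\lVert\nabla f\rVert}, $$

where $f_{\eta\eta}$ is the second derivative of $f$ along the gradient direction and $\kappa$ is the curvature of the isophote through the point.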
3.
The ladder network is not only a deep-learning-based feature extractor but can also be applied to semi-supervised learning. Deep learning achieves the approximation of complex functions while alleviating the tendency of multi-layer neural networks to get trapped in local minima. Traditional methods such as autoencoders and Boltzmann machines tend to ignore the low-dimensional manifold structure of high-dimensional data, and often yield meaningless feature representations that cannot be effectively embedded into subsequent prediction or recognition tasks. From the perspective of manifold learning, this paper proposes a deep representation learning method based on the ladder network, the Laplacian ladder network (LLN). During training, the LLN not only injects noise into each encoding layer and reconstructs it, but also imposes a graph Laplacian constraint at each reconstruction layer, embedding manifold structure into multi-layer feature learning to improve the robustness and discriminability of the extracted features. When labeled data are limited, the LLN fuses the supervised loss and the unsupervised loss in a unified framework for semi-supervised learning. Experiments on the standard handwritten digit dataset MNIST and the object recognition dataset CIFAR-10 show that the LLN achieves better classification results than the ladder network and other semi-supervised methods, making it an effective semi-supervised learning algorithm.
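The exact loss is not given in the abstract; a minimal sketch of the underlying idea, adding a graph Laplacian smoothness penalty $\mathrm{tr}(Z^{\top} L Z)$ to a reconstruction loss, might look as follows (all names, the Gaussian affinity, and the weight `gamma` are illustrative assumptions, not the paper's design):

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W from a symmetric affinity matrix W."""
    return np.diag(W.sum(axis=1)) - W

def laplacian_penalty(Z, L):
    """Smoothness penalty tr(Z^T L Z) = 0.5 * sum_ij W_ij ||z_i - z_j||^2."""
    return np.trace(Z.T @ L @ Z)

# Toy data: 10 samples, 5-d features, with Gaussian affinities as the graph.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 5))
W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
np.fill_diagonal(W, 0.0)
L = graph_laplacian(W)

Z = X @ rng.normal(size=(5, 2))       # stand-in for a hidden-layer embedding
X_rec = Z @ rng.normal(size=(2, 5))   # stand-in for the decoder output

gamma = 0.1  # illustrative trade-off weight
loss = np.mean((X - X_rec) ** 2) + gamma * laplacian_penalty(Z, L)
print(loss)
```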
4.
A Semi-Supervised Laplacian Eigenmap Algorithm
To give manifold learning a semi-supervised character, the known low-dimensional information of some data points on a manifold is used to infer the low-dimensional information of the others, broadening the applicability of manifold learning algorithms. To this end, the Laplacian Eigenmap algorithm (LE) is combined with semi-supervised machine learning, yielding a semi-supervised Laplacian Eigenmap algorithm (SSLE). This semi-supervised manifold learning algorithm performs well on problems such as classification and recognition. Both simulated experiments and real-world examples demonstrate the effectiveness of SSLE.
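The SSLE variant itself is not spelled out in the abstract, but the unsupervised LE step it builds on is standard and can be reproduced with scikit-learn; the dataset and parameters below are illustrative choices:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import SpectralEmbedding

# Standard (unsupervised) Laplacian Eigenmaps: build a k-NN graph,
# then embed with the bottom eigenvectors of the graph Laplacian.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)
le = SpectralEmbedding(n_components=2, affinity="nearest_neighbors",
                       n_neighbors=10, random_state=0)
Y = le.fit_transform(X)   # (1000, 2) low-dimensional coordinates
print(Y.shape)
```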
5.
A Robust Laplacian Eigenmap Algorithm
The sensitivity of the Laplacian Eigenmap algorithm (LE) to outliers is studied, and a robust Laplacian Eigenmap algorithm (RLE) is proposed. Building on outlier detection, the method uses robust PCA (RPCA) to smooth the outliers locally: each outlier and its neighborhood are projected onto a low-dimensional local tangent space, and weights that accurately reflect the outlier's local neighborhood relations are then constructed, reducing the influence of outliers on the Laplacian matrix. Both simulated experiments and real-world examples show that the robust Laplacian Eigenmap algorithm constructed in this way is robust to outliers.
6.
7.
8.
Let $G$ be a graph of $n$ vertices that can be drawn in the plane by straight-line segments so that no $k+1$ of them are pairwise crossing. We show that $G$ has at most $c_k\, n \log^{2k-2} n$ edges. This gives a partial answer to a dual version of a well-known problem of Avital-Hanani, Erdős, Kupitz, Perles, and others. We also construct two point sets $\{p_1, \dots, p_n\}$, $\{q_1, \dots, q_n\}$ in the plane such that any piecewise linear one-to-one mapping $f:\mathbb{R}^2 \to \mathbb{R}^2$ with $f(p_i) = q_i$ ($1 \le i \le n$) is composed of at least $\Omega(n^2)$ linear pieces. It follows from a recent result of Souvaine and Wenger that this bound is asymptotically tight. Both proofs are based on a relation between the crossing number and the bisection width of a graph.
The first author was supported by NSF Grant CCR-91-22103, PSC-CUNY Research Award 663472, and OTKA-4269. An extended abstract of this paper was presented at the 10th Annual ACM Symposium on Computational Geometry, Stony Brook, NY, 1994.
9.
A novel linear discriminant criterion function is proved to be equal to Fisher's criterion function. The analysis of this function is linked to the spectral decomposition of the Laplacian of a graph. Moreover, the function is maximized using two algorithms. Experimental results show the effectiveness and some specific characteristics of our algorithms.
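For reference, Fisher's criterion for a projection direction $w$ is the standard ratio of between-class to within-class scatter (textbook material, not specific to this paper):

$$ J(w) = \frac{w^{\top} S_B\, w}{w^{\top} S_W\, w}, \qquad S_B = \sum_{c} n_c\,(\mu_c - \mu)(\mu_c - \mu)^{\top}, \quad S_W = \sum_{c}\sum_{x \in c} (x - \mu_c)(x - \mu_c)^{\top}. $$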
10.
Neural Processing Letters - Multi-source data are increasingly used in many real-world applications. This kind of data is high-dimensional and comes from different sources, ...
11.
Laplacian operator-based edge detectors
Wang X. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(5): 886-890.
The Laplacian operator is a second-derivative operator often used in edge detection. Compared with first-derivative-based edge detectors such as the Sobel operator, the Laplacian operator may yield better results in edge localization. Unfortunately, it is very sensitive to noise. In this paper, a model based on the Laplacian operator is introduced for constructing edge detectors, and an optimal threshold is introduced for obtaining a maximum a posteriori (MAP) estimate of the edges.
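As general background (the paper's detector family and MAP threshold are not reproduced here), a minimal Laplacian-of-Gaussian edge detector with a simple amplitude threshold might look like this; `sigma` and `thresh` are illustrative choices:

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0, thresh=0.01):
    """Edges as zero-crossings of the Laplacian of Gaussian (LoG).

    The amplitude threshold suppresses zero-crossings caused by noise;
    its value is an illustrative choice, not the paper's MAP threshold.
    """
    log = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    # Zero-crossing: the LoG changes sign within a 3x3 neighborhood.
    mn = ndimage.minimum_filter(log, size=3)
    mx = ndimage.maximum_filter(log, size=3)
    zero_cross = (mn < 0) & (mx > 0)
    strong = (mx - mn) > thresh * np.ptp(log)
    return zero_cross & strong

edges = log_edges(np.random.rand(64, 64))  # toy input
print(edges.sum(), "edge pixels")
```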
12.
The discrete Laplace-Beltrami operator for surface meshes is a fundamental building block for many (if not most) geometry processing algorithms. While Laplacians on triangle meshes have been researched intensively, yielding the cotangent discretization as the de facto standard, the case of general polygon meshes has received much less attention. We present a discretization of the Laplace operator which is consistent with its expression as the composition of divergence and gradient operators, and is applicable to general polygon meshes, including meshes with non-convex, and even non-planar, faces. By virtually inserting a carefully placed point we implicitly refine each polygon into a triangle fan, but then hide the refinement within the matrix assembly. The resulting operator generalizes the cotangent Laplacian, inherits its advantages, and is empirically shown to be on par with or even better than the recent polygon Laplacian of Alexa and Wardetzky [AW11], while being simpler to compute.
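The cotangent discretization referred to as the de facto standard is (standard formula, quoted for context):

$$ (\Delta f)_i = \frac{1}{2A_i} \sum_{j \in N(i)} \left( \cot\alpha_{ij} + \cot\beta_{ij} \right) (f_j - f_i), $$

where $\alpha_{ij}$ and $\beta_{ij}$ are the two angles opposite the edge $(i,j)$ and $A_i$ is a local vertex area, e.g. one third of the total area of the incident triangles.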
13.
This paper analyses the behavior in scale space of linear junction models (L, Y and X models), nonlinear junction models, and linear junction multi-models. The variation of the grey level is considered to be constant, linear or nonlinear in the case of linear models, and constant for the other models. We are mainly interested in the extremum points provided by the Laplacian of the Gaussian function. Moreover, we show that for infinite models the Laplacian of the Gaussian at the corner point is not always equal to zero.

Salvatore Tabbone received his Ph.D. in computer science from the Institut National Polytechnique de Lorraine (France) in 1994. He is currently an assistant professor at the University of Nancy 2 (France) and a member of the QGAR research project on graphics recognition at the LORIA-INRIA research center. His research interests include computer vision, pattern recognition, content-based image retrieval, and document analysis and recognition.

Laurent Alonso was a student at ENS Ulm from 1987 to 1991 and received the Ph.D. degree in computer science from the University of Paris XI, Orsay, France in 1992. From 1991 to 1995 he was a lecturer at the University of Nancy I (France). He is currently a full researcher at INRIA (France). His research interests include realistic rendering, geometric algorithms and combinatorics.

Djemel Ziou received the BEng degree in computer science from the University of Annaba (Algeria) in 1984, and the Ph.D. degree in computer science from the Institut National Polytechnique de Lorraine (INPL), France in 1991. From 1987 to 1993 he was a lecturer at several universities in France. During the same period, he was a researcher at the Centre de Recherche en Informatique de Nancy (CRIN) and the Institut National de Recherche en Informatique et Automatique (INRIA) in France. He is presently a full professor in the Department of Computer Science at the University of Sherbrooke in Canada. He has served on numerous conference committees as a member or chair. He heads the MOIVRE laboratory and the CoRIMedia consortium, which he founded. His research interests include image processing, information retrieval, computer vision and pattern recognition.
14.
15.
Damien Violeau, Agnès Leroy, Antoine Joly, Alexis Hérault. Computers & Mathematics with Applications, 2018, 75(10): 3649-3662.
In order to address the question of the conditioning of the SPH (Smoothed Particle Hydrodynamics) Laplacian, a spectral analysis of this discrete operator is performed. In the case of a periodic Cartesian particle network, the eigenfunctions and eigenvalues of the SPH Laplacian are found on theoretical grounds. The theory agrees well with numerical eigenvalues. The effects of particle disorder and non-periodic conditions are then investigated from a numerical viewpoint. It is found that the matrix condition number is proportional to the square of the number of particles per unit length, irrespective of the space dimension and kernel choice.
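The quadratic scaling is easy to reproduce on a simpler operator. The sketch below uses a plain finite-difference Laplacian on a periodic 1-D grid as a stand-in for the SPH operator (an assumption made for brevity, not the paper's discretization) and prints the condition number against the squared particle count:

```python
import numpy as np

def periodic_laplacian(n):
    """Finite-difference Laplacian on n points of a periodic unit interval."""
    h = 1.0 / n
    L = -2.0 * np.eye(n)
    L += np.eye(n, k=1) + np.eye(n, k=-1)
    L[0, -1] = L[-1, 0] = 1.0   # periodic wrap-around
    return L / h**2

for n in (16, 32, 64, 128):
    ev = np.sort(np.abs(np.linalg.eigvalsh(periodic_laplacian(n))))
    cond = ev[-1] / ev[1]   # skip the zero mode of the periodic operator
    print(f"n={n:4d}  cond={cond:10.1f}  cond/n^2={cond / n**2:.3f}")
```

The ratio cond/n^2 settles to a constant (about 1/pi^2 for this operator), illustrating the proportionality to the squared particle count per unit length.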
16.
Sprekeler H. Neural Computation, 2011, 23(12): 3287-3302.
The past decade has seen a rise of interest in Laplacian eigenmaps (LEMs) for nonlinear dimensionality reduction. LEMs have been used in spectral clustering, in semisupervised learning, and for providing efficient state representations for reinforcement learning. Here, we show that LEMs are closely related to slow feature analysis (SFA), a biologically inspired, unsupervised learning algorithm originally designed for learning invariant visual representations. We show that SFA can be interpreted as a function approximation of LEMs, where the topological neighborhoods required for LEMs are implicitly defined by the temporal structure of the data. Based on this relation, we propose a generalization of SFA to arbitrary neighborhood relations and demonstrate its applicability for spectral clustering. Finally, we review previous work with the goal of providing a unifying view on SFA and LEMs.
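For context, the SFA optimization problem referred to here is, in its standard form (general background, not quoted from the paper): find output functions $g_j$ minimizing the time-averaged squared derivative of $y_j(t) = g_j(x(t))$,

$$ \min_{g_j}\ \big\langle \dot{y}_j^{\,2} \big\rangle_t \quad \text{subject to} \quad \langle y_j \rangle_t = 0,\quad \langle y_j^2 \rangle_t = 1,\quad \langle y_i y_j \rangle_t = 0 \ \ (i < j), $$

i.e. the slowest, decorrelated unit-variance features; the link to LEMs comes from reading temporally adjacent samples as neighbors in a graph.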
17.
Graphical Models, 2012, 74(6): 321-325.
We present a collection of formulas for computing the curvature tensor on parametrized surfaces, on implicit surfaces, and on surfaces obtained by space deformation.
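For the parametrized case, the classical route (standard formulas, included for reference rather than taken from the paper) goes through the first and second fundamental forms: with coefficients $E, F, G$ and $L, M, N$ of a parametrization $\mathbf{x}(u,v)$, the Gaussian and mean curvatures are

$$ K = \frac{LN - M^2}{EG - F^2}, \qquad H = \frac{EN - 2FM + GL}{2\,(EG - F^2)}, $$

and the shape operator, whose invariants carry the curvature tensor data, is $S = \mathrm{I}^{-1}\,\mathrm{II}$.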
18.
19.
Aleksandar Ilić. Computers & Mathematics with Applications, 2010, 59(8): 2776-2783.
Let $G$ be a simple undirected graph with the characteristic polynomial of its Laplacian matrix $L(G)$, $P(G,\mu) = \sum_{k=0}^{n} (-1)^k c_k \mu^{n-k}$. It is well known that for trees the Laplacian coefficient $c_{n-2}$ is equal to the Wiener index of $G$, while $c_{n-3}$ is equal to the modified hyper-Wiener index of the graph. In this paper, we characterize $n$-vertex trees with given matching number which simultaneously minimize all Laplacian coefficients. The extremal tree is a spur, obtained from the star graph with $n - m + 1$ vertices (where $m$ is the matching number) by attaching a pendant edge to each of certain non-central vertices of the star. In particular, it minimizes the Wiener index, the modified hyper-Wiener index and the recently introduced incidence energy of trees, defined as $IE(G) = \sum_{k} \sqrt{\mu_k}$, where $\mu_k$ are the eigenvalues of the signless Laplacian matrix $Q(G) = D(G) + A(G)$. We introduce a general transformation which decreases all Laplacian coefficients simultaneously. In conclusion, we illustrate on the examples of the Wiener index and incidence energy that the opposite problem of simultaneously maximizing all Laplacian coefficients has no solution.
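As a quick check of the coefficient/Wiener-index relation (a worked example added here for illustration, not from the paper): the path $P_3$ has Laplacian eigenvalues $0, 1, 3$, so

$$ P(P_3,\mu) = \mu(\mu - 1)(\mu - 3) = \mu^3 - 4\mu^2 + 3\mu, $$

giving $c_{n-2} = c_1 = 4$, which equals the Wiener index $W(P_3) = d(v_1,v_2) + d(v_2,v_3) + d(v_1,v_3) = 1 + 1 + 2 = 4$.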
20.
The eigenvalues of the Dirichlet Laplacian are used to generate three different sets of features for shape recognition and classification in binary images. The generated features are rotation-, translation-, and size-invariant, and are also shown to be tolerant of noise and boundary deformation. These features are used to classify hand-drawn, synthetic, and natural shapes, with correct classification rates ranging from 88.9% to 99.2%. The classification was done using few features (only two in some cases) and simple feedforward neural networks or minimum Euclidean distance.
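A minimal finite-difference sketch of how such eigenvalue features could be computed on a binary mask follows; the discretization and the ratio features are assumptions for illustration, not the paper's exact definitions. Ratios of eigenvalues are size-invariant because Dirichlet eigenvalues scale inversely with the shape's area:

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def dirichlet_eigenvalues(mask, k=6):
    """Smallest k eigenvalues of the Dirichlet Laplacian on a binary shape.

    5-point finite differences; pixels outside the mask are held at zero,
    which imposes the Dirichlet boundary condition.
    """
    idx = -np.ones(mask.shape, dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    n = len(ys)
    A = lil_matrix((n, n))
    for y, x in zip(ys, xs):
        i = idx[y, x]
        A[i, i] = 4.0
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j = idx[y + dy, x + dx]
            if j >= 0:
                A[i, j] = -1.0
    return np.sort(eigsh(A.tocsr(), k=k, which="SM")[0])

# Toy shape: a filled disk (padded so neighbor lookups never leave the grid).
yy, xx = np.mgrid[-20:21, -20:21]
disk = np.pad((xx**2 + yy**2) <= 15**2, 1)
lam = dirichlet_eigenvalues(disk)
print(lam / lam[0])   # eigenvalue ratios: size-invariant shape features
```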