Found 20 similar documents; search time: 98 ms
1.
In graphic contour extraction centered on reachable-path decisions, an MST growth algorithm for hierarchical routing extraction of graphic contours is proposed to address the difficulty of routing decisions and the precision of path feature values. The algorithm divides the routing topology of a graphic into intra-domain and inter-domain routing. Intra-domain routing recombines the paths associated with non-dominating points to build a weighted undirected graph whose nodes are the dominating points; inter-domain routing, based on the minimum spanning tree (MST) of that undirected graph, exploits the unique reachability between tree nodes to construct the MST growth algorithm. Finally, the two levels are combined to achieve complete contour extraction. Worked examples and applications demonstrate the feasibility and effectiveness of the algorithm.
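The inter-domain step builds on a standard minimum spanning tree. A minimal Kruskal sketch is given below; the edge list and node count are illustrative, not taken from the paper:

```python
# Minimal Kruskal's MST sketch over a weighted undirected graph of
# "dominating points"; the edge list below is illustrative only.

def kruskal_mst(n, edges):
    """edges: list of (weight, u, v); returns list of MST edges (u, v, w)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:               # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
tree = kruskal_mst(4, edges)
# An MST of n nodes has n - 1 edges, and between any two tree nodes there
# is exactly one path -- the unique-reachability property the paper exploits.
```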
2.
3.
4.
3D Reconstruction Based on Hierarchical Slice Data (cited: 1; self-citations: 0; by others: 1)
For 3D reconstruction from hierarchical slice data, a method is proposed that introduces a nesting matrix to build a nesting tree and then constructs a minimum spanning tree to solve the correspondence problem between contour lines on adjacent slices, using the size of the region where two contour rings overlap as the constraint. The method combines overlap-based correspondence with global contour correspondence, reducing the complexity of judging contour topology during reconstruction while still determining contour correspondences accurately.
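The overlap constraint can be illustrated with a toy correspondence step. In this sketch contours are simplified to axis-aligned boxes and the box data are invented; the paper's nesting-tree and MST machinery is not reproduced:

```python
# Sketch of overlap-driven contour correspondence between two adjacent
# slices; contours are reduced to boxes (xmin, ymin, xmax, ymax) and the
# data are illustrative, not from the paper.

def overlap_area(a, b):
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def match_contours(upper, lower, min_overlap=0.0):
    """Greedily pair each upper contour with the lower contour it overlaps
    most, mirroring the overlap-size constraint described in the abstract."""
    pairs = []
    for i, a in enumerate(upper):
        best, best_ov = None, min_overlap
        for j, b in enumerate(lower):
            ov = overlap_area(a, b)
            if ov > best_ov:
                best, best_ov = j, ov
        if best is not None:
            pairs.append((i, best))
    return pairs

upper = [(0, 0, 2, 2), (5, 5, 7, 7)]
lower = [(1, 1, 3, 3), (5, 4, 7, 6)]
pairs = match_contours(upper, lower)
# Each upper contour is paired with its maximally overlapping lower one.
```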
5.
Most research that imitates the visual system to extract contours relies on local visual grouping cues, which leaves the grouping cues visually incomplete. A hierarchical grouping model with global constraints is therefore proposed. First, image edges are detected with an adaptive-scale wavelet, and the salient edges serve as the candidate set for contour grouping. Second, based on the characteristics of visual grouping rules, a hierarchical grouping model is built: global Gestalt cues such as symmetry and parallelism guide local Gestalt grouping cues such as proximity, continuity, collinearity, and co-curvilinearity to complete the grouping and obtain contours consistent with visual perception. Experiments show that the model reduces grouping ambiguity without sacrificing visual plausibility, improving the efficiency and robustness of the algorithm.
6.
Wang Jingbin. Computer Engineering and Applications, 2014, 50(5): 112-115
The characteristics of the R-tree are studied, the influence of outliers on R-tree node construction is considered, and a new R-tree construction algorithm is proposed that incorporates an improved k-medoids clustering algorithm. Compared with the traditional R-tree, the nodes of the R-tree built by the new algorithm are more compact. Experiments show that the R-tree built by the optimized algorithm delivers a clear improvement in query performance.
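The paper's specific k-medoids improvement (e.g. its outlier handling) is not reproduced here, but a plain k-medoids sketch shows the clustering step that groups nearby rectangles before node packing; the points and k are illustrative:

```python
# Plain k-medoids sketch (a stand-in for the paper's improved variant);
# points and k are illustrative.

def k_medoids(points, k, iters=10):
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])  # Manhattan distance

    medoids = list(points[:k])          # naive initialization
    for _ in range(iters):
        # assign every point to its nearest medoid
        clusters = {i: [] for i in range(k)}
        for p in points:
            i = min(range(k), key=lambda i: dist(p, medoids[i]))
            clusters[i].append(p)
        # move each medoid to the member minimizing total in-cluster distance
        new = [min(c, key=lambda m: sum(dist(m, p) for p in c)) if c else medoids[i]
               for i, c in clusters.items()]
        if new == medoids:
            break
        medoids = new
    return medoids, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
medoids, clusters = k_medoids(pts, 2)
# The two well-separated groups each end up around one medoid.
```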
7.
A contour extraction algorithm for sequences of B-mode ultrasound images is proposed. It extracts the contours of interest from the images by constructing edge-contour templates. Experiments show that the algorithm extracts contours from sequential B-mode ultrasound images relatively quickly.
8.
Using the Zmap model, run-length encodings of the object regions on each cutting layer are constructed; nodes are linked into boundaries according to the connectivity of the run-length codes, and a boundary description tree is built by determining the containment relationships between boundaries, yielding the topology of the cutting region. The algorithm is simple and efficient, automatically recognizes contours and islands, and has been applied successfully in practice.
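The first step, per-row run-length encoding of a binary layer mask, can be sketched as follows; the mask and the span representation are illustrative, and the boundary-linking and containment-tree stages are only summarized in comments:

```python
# Sketch of per-row run-length encoding of a binary cutting-layer mask,
# the first step of the boundary-tree construction; the mask is illustrative.

def runs_per_row(mask):
    """For each row, return [(start, end)] index spans of material (value 1)."""
    out = []
    for row in mask:
        spans, start = [], None
        for x, v in enumerate(row):
            if v and start is None:
                start = x
            elif not v and start is not None:
                spans.append((start, x - 1))
                start = None
        if start is not None:
            spans.append((start, len(row) - 1))
        out.append(spans)
    return out

mask = [
    [0, 1, 1, 0, 1],
    [1, 1, 0, 0, 0],
]
rle = runs_per_row(mask)
# Runs on adjacent rows that overlap in x are then linked into boundaries;
# containment between closed boundaries distinguishes contours from islands.
```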
9.
10.
Contour corner detection and feature construction are key to contour-corner-based multi-target recognition in remote sensing images (RSI). To address the shortcomings of existing contour corner detectors in accuracy and noise resistance, an improved contour corner detection algorithm is proposed; a feature string based on the target's principal axis and its contour corners is constructed, and a dynamic programming algorithm computes the similarity between feature strings for target recognition. The rotation angle of the target's principal axis is taken as the target's pose angle. Experiments show that the algorithm quickly identifies a target's rotation angle and classifies targets, and that it is invariant to translation, rotation, and scale, with good noise resistance.
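The abstract does not specify the paper's exact DP similarity measure; a standard Levenshtein edit distance, normalized to [0, 1], stands in for it here. The feature alphabet is illustrative:

```python
# Standard Levenshtein edit-distance DP as a stand-in for the paper's
# feature-string similarity; the strings below are illustrative.

def edit_distance(a, b):
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[m][n]

def similarity(a, b):
    """Normalize edit distance into [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

d = edit_distance("ACBD", "ABCD")  # a transposition costs two edits here
```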
11.
Anna-Pia Lohfink, Florian Wetzels, Jonas Lukasczyk, Gunther H. Weber, Christoph Garth. Computer Graphics Forum, 2020, 39(3): 343-355
We describe a novel technique for the simultaneous visualization of multiple scalar fields, e.g. representing the members of an ensemble, based on their contour trees. Using tree alignments, a graph-theoretic concept similar to edit distance mappings, we identify commonalities across multiple contour trees and leverage these to obtain a layout that can represent all trees simultaneously in an easy-to-interpret, minimally-cluttered manner. We describe a heuristic algorithm to compute tree alignments for a given similarity metric, and give an algorithm to compute a joint layout of the resulting aligned contour trees. We apply our approach to the visualization of scalar field ensembles, discuss basic visualization and interaction possibilities, and demonstrate results on several analytic and real-world examples.
12.
Functional Trees (cited: 1; self-citations: 0; by others: 1)
João Gama. Machine Learning, 2004, 55(3): 219-250
In the context of classification problems, algorithms that generate multivariate trees are able to explore multiple representation languages by using decision tests based on a combination of attributes. In the regression setting, model tree algorithms explore multiple representation languages using linear models at leaf nodes. In this work we study the effects of using combinations of attributes at decision nodes, leaf nodes, or both, in regression and classification tree learning. In order to study the use of functional nodes at different places and for different types of modeling, we introduce a simple unifying framework for multivariate tree learning. This framework combines a univariate decision tree with a linear function by means of constructive induction. Decision trees derived from the framework are able to use decision nodes with multivariate tests, and leaf nodes that make predictions using linear functions. Multivariate decision nodes are built when growing the tree, while functional leaves are built when pruning the tree. We experimentally evaluate a univariate tree, a multivariate tree using linear combinations at inner and leaf nodes, and two simplified versions restricting linear combinations to inner nodes or leaves. The experimental evaluation shows that all functional tree variants exhibit similar performance, with advantages on different datasets. In this study there is a marginal advantage of the full model. These results lead us to study the role of functional leaves and nodes. We use the bias-variance decomposition of the error, cluster analysis, and learning curves as tools for analysis. We observe that, in the datasets under study and for both classification and regression, the use of multivariate decision nodes has more impact on the bias component of the error, while the use of multivariate decision leaves has more impact on the variance component.
13.
14.
Combining Classifiers with Meta Decision Trees (cited: 4; self-citations: 0; by others: 4)
The paper introduces meta decision trees (MDTs), a novel method for combining multiple classifiers. Instead of giving a prediction, MDT leaves specify which classifier should be used to obtain a prediction. We present an algorithm for learning MDTs based on the C4.5 algorithm for learning ordinary decision trees (ODTs). An extensive experimental evaluation of the new algorithm is performed on twenty-one data sets, combining classifiers generated by five learning algorithms: two algorithms for learning decision trees, a rule learning algorithm, a nearest neighbor algorithm and a naive Bayes algorithm. In terms of performance, stacking with MDTs combines classifiers better than voting and stacking with ODTs. In addition, the MDTs are much more concise than the ODTs and are thus a step towards comprehensible combination of multiple classifiers. MDTs also perform better than several other approaches to stacking.
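The core MDT idea, that a leaf names a base classifier rather than a class, can be shown with a toy example. The meta-rule, base classifiers, and data below are invented for illustration and are not the paper's C4.5-based learner:

```python
# Toy illustration of the meta-decision-tree idea: leaves select which base
# classifier to consult instead of predicting a class themselves.
# The rule, classifiers, and inputs are invented for illustration.

base = {
    "stump": lambda x: int(x[0] > 0.5),   # decides on feature 0
    "nb":    lambda x: int(x[1] > 0.5),   # decides on feature 1
}

def mdt_predict(x):
    # A one-split meta tree: the leaf picks whichever base classifier's
    # feature is farther from the 0.5 decision boundary (a toy meta-attribute).
    chosen = "stump" if abs(x[0] - 0.5) >= abs(x[1] - 0.5) else "nb"
    return base[chosen](x)

pred = mdt_predict((0.9, 0.4))  # defers to "stump", which votes on x[0]
```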
15.
16.
Khuller and Schieber (1992) in [1] developed a constructive algorithm to prove that the existence of k-vertex independent trees in a k-vertex connected graph implies the existence of k-edge independent trees in a k-edge connected graph. In this paper, we show a counterexample where their algorithm fails.
17.
Hans de Nivelle. Journal of Automated Reasoning, 1998, 20(1-2): 5-25
We present a modification of the unification algorithm that is adapted to the extraction of simultaneously unifiable literals from discrimination trees. The algorithm is useful for efficient implementation of binary resolution, hyperresolution, and paramodulation. The algorithm is able to traverse simultaneously more than one discrimination tree and to construct a unifier at the same time. In this way backtracking is reduced.
18.
Girijanandan Nucha, Georges-Pierre Bonneau, Stefanie Hahmann, Vijay Natarajan. Computer Graphics Forum, 2017, 36(3): 23-33
Contour trees are extensively used in scalar field analysis. The contour tree is a data structure that tracks the evolution of level set topology in a scalar field. Scalar fields are typically available as samples at vertices of a mesh and are linearly interpolated within each cell of the mesh. A more suitable way of representing scalar fields, especially when a smoother function needs to be modeled, is via higher order interpolants. We propose an algorithm to compute the contour tree for such functions. The algorithm computes a local structure by connecting critical points using a numerically stable monotone path tracing procedure. Such structures are computed for each cell and are stitched together to obtain the contour tree of the function. The algorithm is scalable to higher degree interpolants whereas previous methods were restricted to quadratic or linear interpolants. The algorithm is intrinsically parallelizable and has potential applications to isosurface extraction.
19.
For two disjoint sets of variables, X and Y, and a class of functions C, we define DT(X,Y,C) to be the class of all decision trees over X whose leaves are functions from C over Y. We study the learnability of DT(X,Y,C) using membership and equivalence queries. Boolean decision trees were shown to be exactly learnable by Bshouty, but does this imply the learnability of decision trees that have non-Boolean leaves? A simple encoding of all possible leaf values will work provided that the size of C is reasonable. Our investigation involves several cases where simple encoding is not feasible, i.e., when |C| is large.
We show how to learn decision trees whose leaves are learnable concepts belonging to a class C, DT(X,Y,C), when the separation between the variables X and Y is known. A simple algorithm for decision trees whose leaves are constants is also presented.
Each case above requires at least s separate executions of the algorithm due to Bshouty, where s is the number of distinct leaves of the tree, but we show that if C is a bounded lattice, the class is learnable using only one execution of this algorithm.
Received September 23, 1995; revised January 15, 1996.