Query returned 20 similar documents (search time: 15 ms)
1.
Antoni B. Chan, Nuno Vasconcelos 《IEEE Transactions on Pattern Analysis and Machine Intelligence》2009,31(10):1862-1879
A novel video representation, the layered dynamic texture (LDT), is proposed. The LDT is a generative model, which represents a video as a collection of stochastic layers of different appearance and dynamics. Each layer is modeled as a temporal texture sampled from a different linear dynamical system. The LDT model includes these systems, a collection of hidden layer assignment variables (which control the assignment of pixels to layers), and a Markov random field prior on these variables (which encourages smooth segmentations). An EM algorithm is derived for maximum-likelihood estimation of the model parameters from a training video. It is shown that exact inference is intractable, a problem which is addressed by the introduction of two approximate inference procedures: a Gibbs sampler and a computationally efficient variational approximation. The trade-off between the quality of the two approximations and their complexity is studied experimentally. The ability of the LDT to segment videos into layers of coherent appearance and dynamics is also evaluated, on both synthetic and natural videos. These experiments show that the model possesses an ability to group regions of globally homogeneous, but locally heterogeneous, stochastic dynamics currently unparalleled in the literature.
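Each LDT layer is a temporal texture drawn from a linear dynamical system. A minimal stdlib sketch of sampling such a system (the state/observation matrices here are hypothetical toy values, not parameters learned by the paper's EM algorithm):

```python
import random

def sample_lds(A, C, T, noise=0.1, seed=0):
    """Sample T observations from a linear dynamical system:
    x_{t+1} = A x_t + Gaussian noise,  y_t = C x_t."""
    rng = random.Random(seed)
    n = len(A)                # hidden-state dimension
    x = [1.0] * n             # arbitrary initial state
    frames = []
    for _ in range(T):
        y = [sum(C[i][j] * x[j] for j in range(n)) for i in range(len(C))]
        frames.append(y)
        x = [sum(A[i][j] * x[j] for j in range(n)) + rng.gauss(0.0, noise)
             for i in range(n)]
    return frames

# Hypothetical 2-state system driving 3 observed pixels over 5 frames.
A = [[0.9, 0.1], [0.0, 0.8]]
C = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]
frames = sample_lds(A, C, T=5)
```

The LDT samples each pixel from one of several such systems, with the layer-assignment variables deciding which.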
2.
3.
4.
Jean Berstel, Luc Boasson, Olivier Carton, Isabelle Fagnot 《Theory of Computing Systems》2010,46(3):443-478
We consider Sturmian trees as a natural generalization of Sturmian words. A Sturmian tree is a tree having n+1 distinct subtrees of height n for each n. As for the case of words, Sturmian trees are irrational trees of minimal complexity.
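The complexity condition mirrors Sturmian words, which have exactly n+1 distinct factors (substrings) of length n. A quick stdlib check on the Fibonacci word, the standard Sturmian example (the example word is ours, not the paper's):

```python
def fibonacci_word(length):
    """Prefix of the Fibonacci word: S0 = '0', S1 = '01', Sn = S(n-1) + S(n-2)."""
    a, b = "0", "01"
    while len(b) < length:
        a, b = b, b + a
    return b[:length]

def factor_complexity(w, n):
    """Number of distinct length-n substrings (factors) of w."""
    return len({w[i:i + n] for i in range(len(w) - n + 1)})

w = fibonacci_word(200)
# A Sturmian word has exactly n+1 distinct factors of length n.
counts = [factor_complexity(w, n) for n in range(1, 8)]
```

A Sturmian tree transfers this minimal-complexity property from factors of words to subtrees of bounded height.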
5.
Volume-Surface Trees  Total citations: 2 (self: 0, others: 2)
Tamy Boubekeur, Wolfgang Heidrich, Xavier Granier, Christophe Schlick 《Computer Graphics Forum》2006,25(3):399-406
Many algorithms in computer graphics improve their efficiency by using Hierarchical Space Subdivision Schemes (HS3), such as octrees, kD-trees or BSP trees. Such HS3 usually provide an axis-aligned subdivision of the 3D space embedding a scene or an object. However, the purely volume-based behavior of these schemes often leads to strongly imbalanced surface clustering. In this article, we introduce the VS-Tree, an alternative HS3 providing efficient and accurate surface-based hierarchical clustering via a combination of a global 3D decomposition at coarse subdivision levels, and a local 2D decomposition at fine levels near the surface. First, we show how to efficiently construct VS-Trees over meshes and point-based surfaces, and analyze the improvement it offers for cluster-based surface simplification methods. Then we propose a new surface reconstruction algorithm based on the volume-surface classification of the VS-Tree. This new algorithm is faster than state-of-the-art reconstruction methods and provides a final semi-regular mesh comparable to the output of remeshing algorithms.
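For contrast with the VS-Tree's surface-aware scheme, here is a minimal sketch of the purely volume-based octree splitting the abstract criticizes (names and leaf policy are hypothetical, not the paper's):

```python
def build_octree(points, center, half, max_depth=4, leaf_size=2):
    """Recursively split an axis-aligned cube into octants until each leaf
    holds at most leaf_size points: clustering is driven purely by volume,
    regardless of how the surface samples are distributed."""
    if len(points) <= leaf_size or max_depth == 0:
        return {"center": center, "points": points}
    buckets = {}
    for p in points:
        key = tuple(p[i] >= center[i] for i in range(3))  # octant of p
        buckets.setdefault(key, []).append(p)
    children = []
    for key, pts in buckets.items():
        c = tuple(center[i] + (half / 2 if key[i] else -half / 2)
                  for i in range(3))
        children.append(build_octree(pts, c, half / 2, max_depth - 1, leaf_size))
    return {"center": center, "children": children}

# Four points in four different octants of the cube centered at the origin.
pts = [(1, 1, 1), (-1, -1, -1), (1, -1, 1), (-1, 1, -1)]
tree = build_octree(pts, (0.0, 0.0, 0.0), 2.0, leaf_size=1)
```

The VS-Tree replaces the fine levels of such a purely 3D subdivision with a local 2D decomposition near the surface.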
6.
Vincent Danos, Jean Krivine, Fabien Tarissan 《Electronic Notes in Theoretical Computer Science》2007,175(1):19
RCCS is a variant of Milner's CCS where processes are allowed a controlled form of backtracking. It turns out that the RCCS reinterpretation of a CCS process is equivalent, in the sense of weak bisimilarity, to its causal transition system in CCS. This can be used to develop an efficient method for designing distributed algorithms, which we illustrate here by deriving a distributed algorithm for assembling trees. Such a problem requires solving a highly distributed consensus, and a comparison with a traditional CCS-based solution shows that the code we obtain is shorter, easier to understand, and easier to prove correct by hand, or even to verify.
7.
Functional Trees  Total citations: 1 (self: 0, others: 1)
João Gama 《Machine Learning》2004,55(3):219-250
In the context of classification problems, algorithms that generate multivariate trees are able to explore multiple representation languages by using decision tests based on a combination of attributes. In the regression setting, model tree algorithms explore multiple representation languages but using linear models at leaf nodes. In this work we study the effects of using combinations of attributes at decision nodes, leaf nodes, or both nodes and leaves in regression and classification tree learning. In order to study the use of functional nodes at different places and for different types of modeling, we introduce a simple unifying framework for multivariate tree learning. This framework combines a univariate decision tree with a linear function by means of constructive induction. Decision trees derived from the framework are able to use decision nodes with multivariate tests, and leaf nodes that make predictions using linear functions. Multivariate decision nodes are built when growing the tree, while functional leaves are built when pruning the tree. We experimentally evaluate a univariate tree, a multivariate tree using linear combinations at inner and leaf nodes, and two simplified versions restricting linear combinations to inner nodes and leaves. The experimental evaluation shows that all functional tree variants exhibit similar performance, with advantages in different datasets. In this study, the full model shows a marginal advantage. These results lead us to study the role of functional leaves and nodes. We use the bias-variance decomposition of the error, cluster analysis, and learning curves as tools for analysis. We observe that in the datasets under study, for both classification and regression, the use of multivariate decision nodes has more impact on the bias component of the error, while the use of multivariate decision leaves has more impact on the variance component.
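The contrast between ordinary and functional leaves can be made concrete with a tiny regression example (a sketch of the idea only, not Gama's framework): a standard leaf predicts a constant, while a functional leaf fits a linear model over the examples reaching it.

```python
def fit_constant_leaf(ys):
    """Ordinary regression-tree leaf: predict the mean of the targets."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def fit_linear_leaf(xs, ys):
    """Functional (model-tree) leaf: least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return lambda x: a * x + b

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]   # y = 2x + 1
constant = fit_constant_leaf(ys)
linear = fit_linear_leaf(xs, ys)
```

On linearly varying data, the functional leaf extrapolates correctly where the constant leaf cannot.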
8.
We propose a new data representation for octrees and kd-trees that improves upon memory size and algorithm speed of existing techniques. While pointerless approaches exploit the regular structure of the tree to facilitate efficient data access, their memory footprint becomes prohibitively large as the height of the tree increases. Pointer-based trees require memory consumption proportional to the number of tree nodes, thus exploiting the typical sparsity of large trees. Yet, their traversal is slowed by the need to follow explicit pointers across the different levels. Our solution is a pointerless approach that represents each tree level with its own matrix, as opposed to traditional pointerless trees that use only a single vector. This novel data organization allows us to fully exploit the tree's regular structure and improve the performance of tree operations. By using a sparse matrix data structure we obtain a representation that is suited for sparse and dense trees alike. In particular, it uses less total memory than pointer-based trees even when the data set is extremely sparse. We show how our approach is easily implemented on the GPU and illustrate its performance in typical visualization scenarios.
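The one-matrix-per-level idea can be sketched with a quadtree where each level is a sparse map and child positions are computed arithmetically rather than followed through pointers (a toy layout to illustrate pointerless addressing, not the paper's exact data structure):

```python
class LevelMatrixTree:
    """Pointerless quadtree sketch: each level is its own sparse 'matrix'
    (here a dict), and a node's children are located by index arithmetic."""

    def __init__(self, depth):
        self.levels = [dict() for _ in range(depth + 1)]

    def set(self, level, index, value):
        self.levels[level][index] = value

    def get(self, level, index, default=None):
        return self.levels[level].get(index, default)

    def children(self, level, index):
        """Indices of the four children on the next level (no pointers)."""
        return [4 * index + k for k in range(4)]

t = LevelMatrixTree(depth=2)
t.set(0, 0, "root")
for c in t.children(0, 0):
    t.set(1, c, "child")
```

Because each level stores only occupied cells, the structure stays compact for sparse trees while keeping the O(1) child arithmetic of pointerless schemes.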
9.
10.
We prove convergence in distribution for the profile (the number of nodes at each level), normalized by its mean, of random recursive trees when the limit ratio α of the level and the logarithm of tree size lies in [0, e). Convergence of all moments is shown to hold only for α ∈ [0, 1] (with only convergence of finite moments when α ∈ (1, e)). When the limit ratio is 0 or 1, for which the limit laws are both constant, we prove asymptotic normality for α = 0 and a "quicksort type" limit law for α = 1, the latter case additionally having a small range where there is no fixed limit law. Our tools are based on the contraction method and the method of moments. Similar phenomena also hold for other classes of trees; we apply our tools to binary search trees and give a complete characterization of the profile. The profiles of these random trees represent concrete examples for which the range of convergence in distribution differs from that of convergence of all moments.
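The profile in question is easy to simulate. A minimal sketch (our own illustration, not from the paper) that grows a random recursive tree and tallies nodes per level:

```python
import random
from collections import Counter

def random_recursive_tree_profile(n, seed=0):
    """Grow a random recursive tree on n nodes (node i attaches to a
    uniformly random earlier node) and return its profile: a Counter
    mapping each depth to the number of nodes at that depth."""
    rng = random.Random(seed)
    depth = [0]                       # depth of node 0, the root
    for i in range(1, n):
        parent = rng.randrange(i)     # uniform over the existing nodes
        depth.append(depth[parent] + 1)
    return Counter(depth)

profile = random_recursive_tree_profile(1000)
```

Normalizing such counts by their means, at levels near α·log n, gives the quantities whose limit laws the paper characterizes.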
11.
12.
The improved efficiency of microelectronics and the development of digital transmission systems have allowed experimentation with distributed computing architectures. The requirements of a distributed operating system are outlined, and the principles and architecture used in the system are explained. A method is described of managing the system directories, and an analysis is given of the results obtained.
13.
Multivariate Decision Trees  Total citations: 24 (self: 0, others: 24)
Unlike a univariate decision tree, a multivariate decision tree is not restricted to splits of the instance space that are orthogonal to the features' axes. This article addresses several issues for constructing multivariate decision trees: representing a multivariate test, including symbolic and numeric features, learning the coefficients of a multivariate test, selecting the features to include in a test, and pruning of multivariate decision trees. We present several new methods for forming multivariate decision trees and compare them with several well-known methods. We compare the different methods across a variety of learning tasks, in order to assess each method's ability to find concise, accurate decision trees. The results demonstrate that some multivariate methods are in general more effective than others (in the context of our experimental assumptions). In addition, the experiments confirm that allowing multivariate tests generally improves the accuracy of the resulting decision tree over a univariate tree.
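The difference from univariate splits is just the form of the node test. A minimal sketch (the weights and points are hypothetical, not one of the paper's learned tests):

```python
def axis_test(x, feature, threshold):
    """Univariate test: a split orthogonal to one feature axis."""
    return x[feature] <= threshold

def multivariate_test(x, weights, threshold):
    """Multivariate test: a split on a linear combination of features,
    i.e. an oblique hyperplane in the instance space."""
    return sum(w * xi for w, xi in zip(weights, x)) <= threshold

class_a = [(0.2, 0.6), (0.6, 0.2)]   # both satisfy x1 + x2 <= 1
class_b = [(0.6, 0.9), (0.9, 0.6)]   # both violate it
# No threshold on x1 alone (or on x2 alone) separates these classes,
# since their per-feature ranges overlap; the oblique test does.
assert all(multivariate_test(p, (1.0, 1.0), 1.0) for p in class_a)
assert not any(multivariate_test(p, (1.0, 1.0), 1.0) for p in class_b)
```

Learning the weight vector at each node is the central difficulty the article's methods address.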
14.
15.
Logistic Model Trees  Total citations: 2 (self: 0, others: 2)
Tree induction methods and linear models are popular techniques for supervised learning tasks, both for the prediction of nominal classes and numeric values. For predicting numeric quantities, there has been work on combining these two schemes into model trees, i.e. trees that contain linear regression functions at the leaves. In this paper, we present an algorithm that adapts this idea for classification problems, using logistic regression instead of linear regression. We use a stagewise fitting process to construct the logistic regression models that can select relevant attributes in the data in a natural way, and show how this approach can be used to build the logistic regression models at the leaves by incrementally refining those constructed at higher levels in the tree. We compare the performance of our algorithm to several other state-of-the-art learning schemes on 36 benchmark UCI datasets, and show that it produces accurate and compact classifiers.
Editor: Johannes Fürnkranz. This is an extended version of a paper that appeared in the Proceedings of the 14th European Conference on Machine Learning (Landwehr et al., 2003).
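As a bare-bones stand-in for the paper's stagewise fitting, here is a one-feature logistic leaf model fit by gradient descent (our simplification; the actual algorithm refines the leaf models incrementally from those built higher in the tree):

```python
import math

def fit_logistic(xs, ys, steps=2000, lr=0.5):
    """Logistic-regression leaf: fit p(y=1|x) = sigmoid(w*x + b) by
    batch gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

# Toy leaf data: class 0 at x = 0, 1 and class 1 at x = 2, 3.
w, b = fit_logistic([0.0, 1.0, 2.0, 3.0], [0, 0, 1, 1])
```

The fitted decision boundary sits at x = -b/w, between the two classes.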
16.
Large databases of linguistic annotations are used for testing linguistic hypotheses and for training language processing models. These linguistic annotations are often syntactic or prosodic in nature, and have a hierarchical structure. Query languages are used to select particular structures of interest, or to project out large slices of a corpus for external analysis. Existing languages suffer from a variety of problems in the areas of expressiveness, efficiency, and naturalness for linguistic query. We describe the domain of linguistic trees and discuss the expressive requirements for a query language. Then we present a language that can express a wide range of queries over these trees, and show that the language is first-order complete over trees.
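A core query of the sort described, "label X dominating label Y", is easy to sketch over tuple-encoded trees (the encoding and the toy parse are ours, not the paper's query language):

```python
def subtrees(tree):
    """Yield a (label, children) tree and all of its subtrees."""
    yield tree
    for child in tree[1]:
        yield from subtrees(child)

def query(tree, label, dominated):
    """All nodes labeled `label` that dominate (properly contain)
    a node labeled `dominated` -- e.g. 'VP dominating NP'."""
    return [t for t in subtrees(tree)
            if t[0] == label
            and any(s[0] == dominated
                    for c in t[1] for s in subtrees(c))]

# A toy parse: (S (NP ...) (VP ... (PP ... (NP ...))))
parse = ("S", [("NP", []), ("VP", [("PP", [("NP", [])])])])
hits = query(parse, "VP", "NP")
```

Real linguistic query languages add constraints such as precedence, immediate dominance, and regular paths on top of this basic dominance relation.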
17.
18.
A data structure, called a biased range tree, is presented that preprocesses a set S of n points in ℝ² and a query distribution D for 2-sided orthogonal range counting queries (a.k.a. dominance counting queries). The expected query time for this data structure, when queries are drawn according to D, matches, to within a constant factor, that of the optimal comparison tree for S and D. The memory and preprocessing requirements of the data structure are O(n log n).
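The query being optimized is simple to state; a brute-force reference implementation (ours, with made-up points; the biased range tree answers the same query in expected time tuned to the distribution D):

```python
def dominance_count(points, q):
    """Count points p with p.x <= q.x and p.y <= q.y: a 2-sided
    orthogonal range (dominance) counting query, by linear scan."""
    qx, qy = q
    return sum(1 for (x, y) in points if x <= qx and y <= qy)

pts = [(1, 5), (2, 2), (3, 8), (4, 1), (6, 6)]
```

Any correct structure must return these counts; the paper's contribution is matching the optimal expected number of comparisons for queries drawn from D.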
19.
20.