Similar literature
20 similar documents found (search time: 15 ms)
1.
Steel strip surface defect images exhibit complex texture, contain large amounts of interference, and have a high-dimensional nonlinear geometric structure. Motivated by these characteristics, this paper proposes a dimensionality reduction method for steel strip surface defect images based on a supervised doubly limited-connection Isomap (dls-Isomap). Building on the standard Isomap method, it restricts the construction of the neighborhood graph in two respects, K-nearest-neighbor (KNN) connection and ε-radius connection, and uses class labels as supervision to add extended connections between neighboring points of different classes. Experiments on multi-class Swiss-roll data show that dls-Isomap not only embeds all data points completely into the low-dimensional space, but also preserves both the within-class and between-class geometric structure of the data and resolves the "short-circuit edge" problem of the original Isomap algorithm. Experiments on steel strip surface defect image classification show that the new dls-Isomap-based classification method is well suited to defects contaminated by water stains, oil stains, and similar interference: the recognition rate reaches 78% over five classes of cold-rolled strip defects and 93% for hot-rolled strip defects containing water stains, with 97.6% on the water-stain interference images.
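A rough Python sketch of the kind of doubly restricted, class-supervised neighborhood graph described above, followed by the geodesic-distance step of Isomap. The function name, parameters, and the exact supervised connection rule are illustrative assumptions, not the published dls-Isomap algorithm.

```python
# Doubly restricted neighborhood graph (k-NN AND eps-radius) with a supervised
# reweighting of between-class edges; geodesics are then used as in Isomap.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def restricted_geodesics(X, y, k=10, eps=1.0, cross_class_penalty=5.0):
    D = cdist(X, X)                          # pairwise Euclidean distances
    n = len(X)
    W = np.full((n, n), np.inf)              # inf = no edge
    knn = np.argsort(D, axis=1)[:, 1:k + 1]  # k nearest neighbors (self excluded)
    for i in range(n):
        for j in knn[i]:
            if D[i, j] <= eps:               # keep an edge only if BOTH rules hold
                w = D[i, j]
                if y[i] != y[j]:             # supervision: stretch between-class edges
                    w *= cross_class_penalty
                W[i, j] = W[j, i] = min(W[i, j], w)
    return shortest_path(W, method="D")      # geodesic distances for the MDS step
```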

2.
In 1980, Jackson proved that every 2-connected k-regular graph with at most 3k vertices is Hamiltonian. This result has been extended in several papers. In this note, we determine the minimum number of vertices in a connected k-regular graph that is not Hamiltonian, and we also solve the analogous problem for Hamiltonian paths. Further, we characterize the smallest connected k-regular graphs without a Hamiltonian cycle.

3.
Khuller and Schieber (1992) in [1] developed a constructive algorithm to prove that the existence of k-vertex independent trees in a k-vertex connected graph implies the existence of k-edge independent trees in a k-edge connected graph. In this paper, we show a counterexample where their algorithm fails.

4.
Isomap is one of the most widely used low-dimensional embedding methods, in which geodesic distances on a weighted graph are combined with classical scaling (metric multidimensional scaling). In this paper we address two critical issues that were not considered in Isomap: (1) the generalization (projection) property and (2) topological stability. We then present a robust kernel Isomap method equipped with both properties. We relate Isomap to Mercer kernel machines, so that the generalization property emerges naturally through kernel principal component analysis. For topological stability, we investigate the network flow in a graph and provide a method for eliminating critical outliers. The useful behavior of robust kernel Isomap is confirmed through numerical experiments with several data sets.
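A useful way to see the kernel connection mentioned in the abstract is that plain Isomap already amounts to an eigendecomposition of the doubly centered squared-geodesic-distance matrix, i.e., a kernel-PCA-like step. The sketch below shows only that baseline; it assumes the k-NN graph is connected and does not include the paper's outlier-elimination or kernel-correction steps.

```python
# Plain Isomap as an eigendecomposition of the doubly centered squared geodesic
# distance matrix (the kernel-PCA view of classical MDS).
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap_embedding(X, n_neighbors=8, n_components=2):
    G = kneighbors_graph(X, n_neighbors, mode="distance")
    D = shortest_path(G, method="D", directed=False)   # geodesic distances
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n                # centering matrix
    K = -0.5 * H @ (D ** 2) @ H                        # "kernel" of classical MDS
    vals, vecs = np.linalg.eigh(K)
    top = np.argsort(vals)[::-1][:n_components]        # leading eigenpairs
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```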

5.
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. For a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
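For reference, the classical non-incremental k-core decomposition works by repeatedly peeling minimum-degree vertices; the paper's contribution is updating the resulting core numbers under edge insertions and deletions without recomputing from scratch. A minimal peeling baseline (not the incremental algorithms) might look like this:

```python
# Classical k-core decomposition by peeling minimum-degree vertices,
# implemented with a lazy heap. Only the baseline, not the incremental updates.
import heapq

def core_numbers(adj):
    """adj: dict mapping vertex -> set of neighbors. Returns vertex -> core number."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    heap = [(d, v) for v, d in degree.items()]
    heapq.heapify(heap)
    core, removed, current = {}, set(), 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != degree[v]:
            continue                          # stale heap entry, skip
        current = max(current, d)             # core values only grow while peeling
        core[v] = current
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                heapq.heappush(heap, (degree[u], u))
    return core
```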

6.
Manifold learning methods for unsupervised nonlinear dimensionality reduction have proven effective in the visualization of high-dimensional data sets. When dealing with classification tasks, supervised extensions of manifold learning techniques, in which class labels are used to improve the embedding of the training points, require an appropriate method for out-of-sample mapping. In this paper we propose multi-output kernel ridge regression (KRR) for out-of-sample mapping in supervised manifold learning, in place of the general regression neural networks (GRNN) adopted by previous studies on the subject. Specifically, we consider a supervised agglomerative variant of Isomap and compare the performance of classification methods when the out-of-sample embedding is based on KRR and GRNN, respectively. Extensive computational experiments, using support vector machines and k-nearest neighbors as base classifiers, provide statistical evidence that out-of-sample mapping based on KRR consistently dominates its GRNN counterpart, and that supervised agglomerative Isomap with KRR achieves higher accuracy than direct classification methods on most data sets.
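A minimal multi-output kernel ridge regression for out-of-sample mapping can be written in a few lines: fit a kernel regressor from the original features to the training embedding and apply it to new points. The kernel choice and regularization value below are illustrative assumptions, not the settings used in the paper.

```python
# Multi-output KRR: map new input points into an existing (supervised) embedding.
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_out_of_sample_map(X_train, Y_embedding, lam=1e-3, gamma=0.1):
    K = rbf_kernel(X_train, X_train, gamma)
    # dual coefficients for all output dimensions at once (multi-output KRR)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), Y_embedding)
    return lambda X_new: rbf_kernel(X_new, X_train, gamma) @ alpha
```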

7.
Manifold-ranking is a powerful method in semi-supervised learning, and its performance heavily depends on the quality of the constructed graph. In this paper, we propose a novel graph structure named the k-regular nearest neighbor (k-RNN) graph, together with its construction algorithm, and apply the new graph structure in the framework of manifold-ranking based retrieval. We show that the manifold-ranking algorithm based on our proposed graph structure performs better than those based on existing graph structures, such as the k-nearest neighbor (k-NN) graph and the connected graph, in image retrieval, 2D data clustering, and 3D model retrieval. In addition, automatic sample reweighting and graph updating algorithms are presented for the relevance feedback of our algorithm. Experiments demonstrate that the proposed algorithm outperforms state-of-the-art algorithms.
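The manifold-ranking step itself is a simple score-propagation iteration over a normalized affinity matrix. The sketch below uses a generic symmetric affinity matrix rather than the paper's k-RNN graph construction, which is the actual contribution.

```python
# Manifold ranking: propagate query scores via f <- alpha*S*f + (1-alpha)*y.
import numpy as np

def manifold_ranking(W, y, alpha=0.99, iters=200):
    """W: symmetric affinity matrix with zero diagonal; y: query indicator vector."""
    d = W.sum(axis=1)
    Dinv = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = (W * Dinv[:, None]) * Dinv[None, :]   # D^{-1/2} W D^{-1/2}
    f = np.zeros_like(y, dtype=float)
    for _ in range(iters):
        f = alpha * (S @ f) + (1 - alpha) * y
    return f                                   # higher score = more relevant
```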

8.
A k-disjoint path cover (k-DPC for short) of a graph is a set of k internally vertex-disjoint paths from given sources to sinks that collectively cover every vertex in the graph. In this paper, we establish a necessary and sufficient condition for the cube of a connected graph to have a 3-DPC joining a single source to three sinks. We also show that the cube of a connected graph always has a 3-DPC joining any two given vertices.

9.
The k-nearest neighbors classifier is one of the most widely used methods of classification due to several interesting features, such as good generalization and easy implementation. Although simple, it is usually able to match, and even beat, more sophisticated and complex methods. However, no successful method has been reported so far to apply boosting to k-NN. As boosting methods have proved very effective in improving the generalization capabilities of many classification algorithms, proposing an appropriate application of boosting to k-nearest neighbors is of great interest. Ensemble methods rely on the instability of the classifiers to improve their performance; since k-NN is fairly stable with respect to resampling, these methods fail in their attempt to improve the performance of the k-NN classifier. On the other hand, k-NN is very sensitive to input selection, so ensembles based on subspace methods are able to improve the performance of single k-NN classifiers. In this paper we exploit the sensitivity of k-NN to the input space to develop two methods for boosting k-NN. The two approaches modify the view of the data that each classifier receives so that the accurate classification of difficult instances is favored. The two approaches are compared with the classifier alone and with bagging and random subspace methods, showing a marked and significant improvement in generalization error. The comparison is performed using a large test set of 45 problems from the UCI Machine Learning Repository. A further study on noise tolerance shows that the proposed methods are less affected by class label noise than the standard methods.
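As a point of comparison, a random-subspace ensemble of k-NN classifiers (one of the baselines mentioned above) can be sketched as follows; the feature-sampling scheme and vote aggregation are generic choices, not the authors' boosting procedures.

```python
# Random-subspace ensemble of k-NN classifiers. Assumes integer class labels.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def subspace_knn_ensemble(X, y, n_members=25, subspace_frac=0.5, k=3, seed=0):
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    members = []
    for _ in range(n_members):
        feats = rng.choice(n_features, max(1, int(subspace_frac * n_features)),
                           replace=False)
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[:, feats], y)
        members.append((feats, clf))

    def predict(X_new):
        votes = np.array([clf.predict(X_new[:, feats]) for feats, clf in members])
        # majority vote over ensemble members (columns correspond to samples)
        return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

    return predict
```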

10.
Given a node-weighted graph, the minimum-weighted dominating set (MWDS) problem is to find a minimum-weight vertex subset such that every vertex is either contained in the subset or has a neighbor in it. The minimum-weighted connected dominating set (MWCDS) problem is to find an MWDS such that the subgraph induced by the subset is connected. In this paper, we study these two problems on unit disk graphs. We present a (4+ε)-approximation algorithm for MWDS based on a dynamic programming algorithm for Min-Weight Chromatic Disk Cover. We also propose a (1+ε)-approximation algorithm for the connecting part by giving a polynomial-time approximation scheme for the Node-Weighted Steiner Tree problem when the given terminal set is c-local, and thus obtain a (5+ε)-approximation algorithm for MWCDS.
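For intuition about the problem (not the paper's (4+ε)-approximation), a generic greedy heuristic for minimum-weight dominating set picks, at each step, the vertex with the smallest weight per newly dominated vertex:

```python
# Generic greedy heuristic for minimum-weight dominating set. Only a simple
# ln(n)-style baseline, not the (4+eps)-approximation for unit disk graphs.
def greedy_mwds(adj, weight):
    """adj: dict v -> set of neighbors; weight: dict v -> positive weight."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        def gain(v):
            return len((adj[v] | {v}) & undominated)
        best = min((v for v in adj if gain(v) > 0),
                   key=lambda v: weight[v] / gain(v))
        chosen.add(best)
        undominated -= adj[best] | {best}
    return chosen
```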

11.
A coloring of a graph is convex if it induces a partition of the vertices into connected subgraphs. Besides being an interesting property from a theoretical point of view, tests for convexity have applications in various areas involving large graphs. We study the important subcase of testing for convexity in trees. This problem is linked, among other possible applications, with the study of phylogenetic trees, which are central in genetic research, and are used in linguistics and other areas. We give a 1-sided, non-adaptive, distribution-free ε-test for the convexity of tree colorings. The query complexity of our test is O(k/ε), where k is the number of colors, and the additional computational complexity is O(n). On the other hand, we prove a lower bound of Ω(√(k/ε)) on the query complexity of tests for convexity in the standard model, which applies even for (unweighted) paths. We also consider whether the dependency on k can be reduced in some cases, and provide an alternative testing algorithm for the case of paths. Then we investigate a variant of convexity, namely quasi-convexity, in which all but one of the colors are required to induce connected components. For this problem we provide a 1-sided, non-adaptive ε-test with query complexity O(k/ε²) and time complexity O(kn/ε). For both our convexity and quasi-convexity tests, we show that, assuming that a query takes constant time, the time complexity can be reduced to a constant independent of n if we allow a preprocessing stage of time O(n) and O(n²), respectively. Finally, we show how to test for a variation of convexity and quasi-convexity where the maximum number of connectivity classes of each color is allowed to be a constant value other than 1.
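The property being tested can be pinned down by an exact, non-sublinear check: a coloring is convex exactly when every color class induces a connected subgraph. The brute-force verification below is only meant to fix the definition; the paper's testers answer this approximately with sublinear query complexity.

```python
# Exact convexity check: every color class must induce a connected subgraph.
from collections import defaultdict, deque

def is_convex_coloring(adj, color):
    """adj: dict v -> set of neighbors; color: dict v -> color label."""
    classes = defaultdict(set)
    for v, c in color.items():
        classes[c].add(v)
    for c, verts in classes.items():
        start = next(iter(verts))
        seen, queue = {start}, deque([start])
        while queue:                          # BFS restricted to one color class
            u = queue.popleft()
            for w in adj[u]:
                if color[w] == c and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if seen != verts:                     # the class splits into >1 component
            return False
    return True
```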

12.
13.
When performing visualization and classification, people often confront the problem of dimensionality reduction. Isomap is one of the most promising nonlinear dimensionality reduction techniques. However, when Isomap is applied to real-world data, it shows some limitations, such as being sensitive to noise. In this paper, an improved version of Isomap, namely S-Isomap, is proposed. S-Isomap utilizes class information to guide the procedure of nonlinear dimensionality reduction; such a procedure is called supervised nonlinear dimensionality reduction. In S-Isomap, the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points, which is specially designed to integrate the class information. The dissimilarity has several good properties which help to discover the true neighborhood of the data and thus make S-Isomap a robust technique for both visualization and classification, especially for real-world problems. In the visualization experiments, S-Isomap is compared with Isomap, LLE, and WeightedIso, and the results show that S-Isomap performs the best. In the classification experiments, S-Isomap is used as a preprocessing step for classification and compared with Isomap and WeightedIso, as well as some other well-established classification methods, including the K-nearest neighbor classifier, BP neural network, J4.8 decision tree, and SVM. The results reveal that S-Isomap outperforms Isomap and WeightedIso in classification, and it is highly competitive with those well-known classification methods.
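One commonly cited form of the S-Isomap dissimilarity shrinks distances within a class and inflates them between classes before the neighborhood graph is built. The sketch below follows that form; treat the exact expression and the roles of the β and α parameters as assumptions rather than a verbatim transcription of the paper.

```python
# Class-aware dissimilarity in the spirit of S-Isomap: compressed within a
# class, inflated between classes. Exact formula is an assumption.
import numpy as np
from scipy.spatial.distance import cdist

def supervised_dissimilarity(X, y, beta=None, alpha=0.5):
    y = np.asarray(y)
    d2 = cdist(X, X, "sqeuclidean")
    if beta is None:
        beta = d2.mean()                              # scale parameter
    same = y[:, None] == y[None, :]
    within = np.sqrt(1.0 - np.exp(-d2 / beta))        # bounded, stays below 1
    between = np.sqrt(np.exp(d2 / beta)) - alpha      # grows with distance
    return np.where(same, within, between)
```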

14.
In this paper we show that the graph of k-ary trees, connected by rotations, contains a Hamilton cycle. Our proof is constructive and thus provides a cyclic Gray code for k-ary trees. Furthermore, we identify a basic building block of this graph as the 1-skeleton of the polytopal complex dual to the lower faces of a certain cyclic polytope.

15.
The Graph Motif problem asks whether a given multiset of colors appears on a connected subgraph of a vertex-colored graph. The fastest known parameterized algorithm for this problem is based on a reduction to the k-Multilinear Detection (k-MlD) problem: the detection of multilinear terms of total degree k in polynomials presented as circuits. We revisit k-MlD and define k-CMlD, a constrained version of it which reflects Graph Motif more faithfully. We then give a fast algorithm for k-CMlD. As a result we obtain faster parameterized algorithms for Graph Motif and variants of it.

16.
Yuichi Yoshida, Hiro Ito. Algorithmica, 2012, 62(3-4): 701-712
We present an algorithm for testing the k-vertex-connectivity of graphs with a given maximum degree bound. The time complexity of the algorithm is independent of the number of vertices and edges of the graph. For a fixed degree bound d, a graph G with n vertices and maximum degree at most d is called ε-far from k-vertex-connectivity when at least εdn/2 edges must be added to or removed from G to obtain a k-vertex-connected graph with maximum degree at most d. The algorithm always accepts every graph that is k-vertex-connected and rejects every graph that is ε-far from k-vertex-connectivity with probability at least 2/3. The algorithm runs in O(d(c/(εd))^k log(1/(εd))) time (where c>1 is a constant) for (k−1)-vertex-connected graphs, and in O(d(ck/(εd))^k log(k/(εd))) time (where c>1 is a constant) for general graphs. It is the first constant-time k-vertex-connectivity testing algorithm for general k≥4.

17.
We say that a distribution over {0,1}^n is (ε,k)-wise independent if its restriction to every k coordinates results in a distribution that is ε-close to the uniform distribution. A natural question regarding (ε,k)-wise independent distributions is how close they are to some k-wise independent distribution. We show that there exist (ε,k)-wise independent distributions whose statistical distance is at least n^{O(k)}·ε from any k-wise independent distribution. In addition, we show that for any (ε,k)-wise independent distribution there exists some k-wise independent distribution whose statistical distance is n^{O(k)}·ε.
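The definition can be checked directly for a small, explicitly given distribution by restricting it to every k-subset of coordinates and measuring the statistical distance from uniform, as in the following brute-force sketch (exponential cost, illustrative only):

```python
# Brute-force check of (eps, k)-wise independence for an explicit distribution
# over {0,1}^n given as a dict of n-bit tuples -> probabilities.
from itertools import combinations, product

def is_eps_k_wise_independent(dist, n, k, eps):
    uniform = 1.0 / (2 ** k)
    for coords in combinations(range(n), k):
        marginal = {}
        for x, p in dist.items():
            key = tuple(x[i] for i in coords)
            marginal[key] = marginal.get(key, 0.0) + p
        # total variation distance of the restriction from the uniform distribution
        tv = 0.5 * sum(abs(marginal.get(pattern, 0.0) - uniform)
                       for pattern in product((0, 1), repeat=k))
        if tv > eps:
            return False
    return True
```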

18.
Even faster parameterized cluster deletion and cluster editing
Cluster Deletion and Cluster Editing ask to transform a graph, by at most k edge deletions or edge edits, respectively, into a cluster graph, i.e., a disjoint union of cliques. Equivalently, a cluster graph has no conflict triples, i.e., two incident edges without a transitive edge. We solve the two problems in time O*(1.415^k) and O*(1.76^k), respectively. These results round off our earlier work by considerably improved time bounds. For Cluster Deletion we use a technique that cuts away small connected components that no longer contribute to the exponential part of the time complexity. As this idea is simple and versatile, it may lead to improvements for several other parameterized graph problems. The improvement for Cluster Editing is achieved by using the full power of an earlier structure theorem for graphs where no edge is in three conflict triples.
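The starting point for such algorithms is the classical O*(3^k) branching on conflict triples: find a path u-v-w whose chord uw is missing and branch on the three edits that resolve it. The sketch below implements only that textbook baseline, not the improved branching and structure theorem of the paper.

```python
# O*(3^k) branching for Cluster Editing on conflict triples.
def cluster_editing(edges, vertices, k):
    """edges: frozenset of frozenset({u, v}); True iff <= k edits suffice."""
    if k < 0:
        return False

    def find_conflict():
        for v in vertices:
            nbrs = [u for u in vertices if frozenset({u, v}) in edges]
            for i, u in enumerate(nbrs):
                for w in nbrs[i + 1:]:
                    if frozenset({u, w}) not in edges:
                        return u, v, w        # path u - v - w with chord uw missing
        return None

    triple = find_conflict()
    if triple is None:
        return True                           # already a disjoint union of cliques
    u, v, w = triple
    return (cluster_editing(edges - {frozenset({u, v})}, vertices, k - 1)      # delete uv
            or cluster_editing(edges - {frozenset({v, w})}, vertices, k - 1)   # delete vw
            or cluster_editing(edges | {frozenset({u, w})}, vertices, k - 1))  # add uw
```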

19.
In Valiant's distribution-independent model of concept learning, Angluin and Laird introduced a formal model of a noise process, called the classification noise process, to study how to compensate for randomly introduced errors, or noise, in classifying the example data. In this article, we investigate the problem of designing efficient learning algorithms in the presence of classification noise. First, we develop a technique for building efficient robust learning algorithms, called noise-tolerant Occam algorithms, and show that, using them, one can construct a polynomial-time algorithm for learning a class of Boolean functions in the presence of classification noise. Next, as an instance of such problems of learning in the presence of classification noise, we focus on the learning problem of Boolean functions represented by decision trees. We present a noise-tolerant Occam algorithm for k-DL (the class of decision lists with conjunctive clauses of size at most k at each decision, introduced by Rivest) and hence conclude that k-DL is polynomially learnable in the presence of classification noise. Further, we extend the noise-tolerant Occam algorithm for k-DL to one for r-DT (the class of decision trees of rank at most r, introduced by Ehrenfeucht and Haussler) and conclude that r-DT is polynomially learnable in the presence of classification noise.

20.
For a connected graph G=(V,E), a subset U ⊆ V is a disconnected cut if U disconnects G and the subgraph G[U] induced by U is disconnected as well. A cut U is a k-cut if G[U] contains exactly k (≥1) components. More specifically, a k-cut U is a (k,ℓ)-cut if V∖U induces a subgraph with exactly ℓ (≥2) components. The Disconnected Cut problem is to test whether a graph has a disconnected cut and is known to be NP-complete. The problems k-Cut and (k,ℓ)-Cut are to test whether a graph has a k-cut or (k,ℓ)-cut, respectively. By pinpointing a close relationship to graph contractibility problems we show that (k,ℓ)-Cut is in P for k=1 and any fixed constant ℓ≥2, while it is NP-complete for any fixed pair k,ℓ≥2. We then prove that k-Cut is in P for k=1 and NP-complete for any fixed k≥2. On the other hand, for every fixed integer g≥0, we present an FPT algorithm that solves (k,ℓ)-Cut on graphs of Euler genus at most g when parameterized by k+ℓ. By modifying this algorithm we can also show that k-Cut is in FPT for this graph class when parameterized by k. Finally, we show that Disconnected Cut is solvable in polynomial time for minor-closed classes of graphs excluding some apex graph.
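The definitions can be made concrete with a simple verification routine: a subset U is a (k,ℓ)-cut when the subgraph induced by U has exactly k connected components and the subgraph induced by V∖U has exactly ℓ. The sketch below is just this definitional check, not the paper's algorithms.

```python
# Check whether U is a (k, l)-cut by counting connected components of the
# subgraphs induced by U and by V \ U.
def num_components(adj, verts):
    verts, seen, count = set(verts), set(), 0
    for s in verts:
        if s in seen:
            continue
        count += 1
        stack = [s]
        seen.add(s)
        while stack:                          # DFS within the induced subgraph
            u = stack.pop()
            for w in adj[u]:
                if w in verts and w not in seen:
                    seen.add(w)
                    stack.append(w)
    return count

def is_k_l_cut(adj, U, k, l):
    V = set(adj)
    return num_components(adj, U) == k and num_components(adj, V - set(U)) == l
```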
