Similar Documents
20 similar documents found (search time: 250 ms)
1.
Geometric model fitting is a typical chicken-&-egg problem: data points should be clustered based on geometric proximity to models whose unknown parameters must be estimated at the same time. Most existing methods, including generalizations of RANSAC, greedily search for models with the most inliers (within a threshold), ignoring the overall classification of points. We formulate geometric multi-model fitting as an optimal labeling problem with a global energy function balancing geometric errors and regularity of inlier clusters. Regularization based on spatial coherence (on some near-neighbor graph) and/or label costs is NP-hard. Standard combinatorial algorithms with guaranteed approximation bounds (e.g. α-expansion) can minimize such regularization energies over a finite set of labels, but they are not directly applicable to a continuum of labels, e.g. ℝ² in line fitting. Our proposed approach (PEaRL) combines model sampling from data points as in RANSAC with iterative re-estimation of inliers and models’ parameters based on a global regularization functional. This technique efficiently explores the continuum of labels in the context of energy minimization. In practice, PEaRL converges to a good-quality local minimum of the energy, automatically selecting a small number of models that best explain the whole data set. Our tests demonstrate that this energy-based approach significantly improves the state of the art in geometric model fitting, which is currently dominated by various greedy generalizations of RANSAC.
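To make the propose/label/re-estimate structure concrete, here is a minimal Python sketch of a PEaRL-style loop for 2D line fitting. It is an illustration only: it omits the spatial-coherence term and replaces the global energy minimization (α-expansion with label costs) by a greedy per-point labeling plus a crude minimum-support rule; all parameter names are illustrative.

```python
# A PEaRL-style propose / label / re-estimate loop for 2D line fitting
# (sketch: no spatial-coherence term, greedy labeling instead of alpha-expansion,
# and an inlier/min-support rule standing in for label costs).
import numpy as np

def propose_lines(points, n_models, rng):
    """Sample candidate lines (a, b, c), a*x + b*y + c = 0, from random point pairs."""
    models = []
    for _ in range(n_models):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        d = q - p
        normal = np.array([-d[1], d[0]])
        if np.linalg.norm(normal) < 1e-12:
            continue
        normal /= np.linalg.norm(normal)
        models.append((normal[0], normal[1], -normal @ p))
    return models

def residuals(points, model):
    a, b, c = model
    return np.abs(points @ np.array([a, b]) + c)

def pearl_sketch(points, n_proposals=100, inlier_thresh=0.1, min_inliers=5,
                 iters=10, seed=0):
    rng = np.random.default_rng(seed)
    models = propose_lines(points, n_proposals, rng)
    for _ in range(iters):
        res = np.stack([residuals(points, m) for m in models])   # (M, N)
        labels = res.argmin(axis=0)                               # greedy labeling step
        new_models = []
        for i in range(len(models)):
            idx = np.flatnonzero((labels == i) & (res[i] < inlier_thresh))
            if len(idx) < min_inliers:
                continue                          # model cannot "pay" its label cost
            pts = points[idx]                     # re-estimate parameters on inliers
            centroid = pts.mean(axis=0)
            _, _, vt = np.linalg.svd(pts - centroid)
            a, b = vt[-1]                         # line normal = least-variance direction
            new_models.append((a, b, -(a * centroid[0] + b * centroid[1])))
        if not new_models:
            break
        models = new_models
    return models
```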

2.
Component-based software development is a promising approach for controlling the complexity and quality of software systems. Nevertheless, recent advances in quality control techniques do not seem to keep up with the growing complexity of embedded software; embedded systems often consist of dozens to hundreds of software/hardware components that exhibit complex interaction behavior. Unanticipated quality defects in a component can be a major source of system failure. To address this issue, this paper suggests a design verification approach integrated into the model-driven, component-based development methodology Marmot. The notion of abstract components—the basic building blocks of Marmot—helps to lift the level of abstraction, facilitates high-level reuse, and reduces verification complexity by localizing verification problems between abstract components before and after refinement. This enables the identification of unanticipated design errors in the early stages of development. This work introduces the Marmot methodology, presents a design verification approach in Marmot, and demonstrates its application to the development of a μ-controller-based abstraction of a car mirror control system. An application to TinyOS shows that the approach helps to reuse models as well as their verification results in the development process.

3.
Bounded model checking of software using SMT solvers instead of SAT solvers
C bounded model checking (cbmc) has proved to be a successful approach to automatic software analysis. The key idea is to (i) build a propositional formula whose models correspond to program traces (of bounded length) that violate some given property and (ii) use state-of-the-art SAT solvers to check the resulting formulae for satisfiability. In this paper, we propose a generalisation of the cbmc approach based on an encoding into richer (but still decidable) theories than propositional logic. We show that our approach may lead to considerably more compact formulae than those obtained with cbmc. We have built a prototype implementation of our technique that uses a satisfiability modulo theories (SMT) solver to solve the resulting formulae. Computer experiments indicate that our approach compares favourably with—and on some significant problems outperforms—cbmc.
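As a toy illustration of the encoding idea (not the paper's actual translation), the following snippet uses the z3 SMT solver to unroll two iterations of a hypothetical loop body x = x + y over the integers and asks for a trace violating an assertion; a satisfying model is a bounded counterexample.

```python
# Toy SMT-based bounded model checking over the theory of integers
# (assumes the `z3-solver` package; the program and property are made up).
from z3 import Int, Solver, And, Not, sat

x0, x1, x2, y = Int('x0'), Int('x1'), Int('x2'), Int('y')

trace = And(
    x0 == 0,
    y >= 0, y <= 6,      # nondeterministic input in [0, 6]
    x1 == x0 + y,        # unrolled iteration 1 (SSA form)
    x2 == x1 + y,        # unrolled iteration 2
)
prop = x2 <= 10          # property to check at bound 2

s = Solver()
s.add(trace, Not(prop))  # satisfiable <=> a bounded counterexample exists
if s.check() == sat:
    print('counterexample:', s.model())
else:
    print('property holds up to bound 2')
```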

4.
In many AI fields, one must face the problem of finding a solution that is as close as possible to a given configuration. This paper addresses this problem in a propositional framework. We introduce the decision problem distance-sat, which consists in determining whether a propositional formula admits a model that disagrees with a given partial interpretation on at most d variables. The complexity of distance-sat and of several of its restrictions is identified. Two algorithms based on the well-known Davis/Logemann/Loveland search procedure for the satisfiability problem sat are presented for solving distance-sat on CNF formulas. Their computational behaviors are compared with those of sat solvers run on sat encodings of distance-sat instances. The empirical evaluation allows us to draw firm conclusions about the respective performances of the algorithms and to relate the practical difficulty of distance-sat to that of sat. A preliminary version of this paper appeared under the title “distance-sat: Complexity and Algorithms” in the proceedings of the 16th National Conference on Artificial Intelligence (AAAI’99), pages 642–647, 1999.
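To pin down the decision problem itself, here is a naive Python sketch (exhaustive enumeration, not the DPLL-based algorithms studied in the paper) that checks whether a CNF formula has a model disagreeing with a given partial interpretation on at most d variables.

```python
# Brute-force semantics of distance-sat: is there a model of the CNF formula
# that disagrees with a given partial interpretation on at most d variables?
from itertools import product

def satisfies(clauses, model):
    # clauses: list of clauses, each a list of DIMACS-style signed literals
    return all(any(model[abs(l)] == (l > 0) for l in clause) for clause in clauses)

def distance_sat(clauses, n_vars, partial, d):
    """partial: dict {var: bool}, the reference (partial) interpretation."""
    for bits in product([False, True], repeat=n_vars):
        model = {v + 1: bits[v] for v in range(n_vars)}
        if not satisfies(clauses, model):
            continue
        if sum(model[v] != b for v, b in partial.items()) <= d:
            return model                     # witness model within distance d
    return None

# (x1 or x2) and (not x1 or x3); reference {x1=True, x3=False}; d = 1
print(distance_sat([[1, 2], [-1, 3]], 3, {1: True, 3: False}, 1))
```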

5.
Given a graph with edges colored Red and Blue, we study the problem of sampling and approximately counting the number of matchings with exactly k Red edges. We solve the problem of estimating the number of perfect matchings with exactly k Red edges for dense graphs. We study a Markov chain on the space of all matchings of a graph that favors matchings with k Red edges. We show that it is rapidly mixing using non-traditional canonical paths that can backtrack. We show that this chain can be used to sample matchings in the 2-dimensional toroidal lattice of any fixed size with k Red edges, where the horizontal edges are Red and the vertical edges are Blue. An extended abstract appeared in J.R. Correa, A. Hevia and M.A. Kiwi (eds.) Proceedings of the 7th Latin American Theoretical Informatics Symposium, LNCS 3887, pp. 190–201, Springer, 2006. N. Bhatnagar’s and D. Randall’s research was supported in part by NSF grants CCR-0515105 and DMS-0505505. V.V. Vazirani’s research was supported in part by NSF grants 0311541, 0220343 and CCR-0515186. N. Bhatnagar’s and E. Vigoda’s research was supported in part by NSF grant CCR-0455666.
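For intuition, the sketch below implements a generic Metropolis chain on matchings whose stationary distribution favors matchings with exactly k Red edges; it is not the specific chain (nor the backtracking canonical-path analysis) from the paper, and the bias parameter lam is illustrative.

```python
# Generic Metropolis chain on matchings, biased toward matchings with k Red
# edges via the weight lam**(-|#red(M) - k|).  Single-edge add/remove moves.
import random

def metropolis_matchings(edges, colors, k, lam=4.0, steps=100000, seed=0):
    """edges: list of (u, v); colors[i] is 'R' or 'B' for edges[i]."""
    rng = random.Random(seed)
    matching = set()                       # indices of edges in the matching
    matched = set()                        # currently matched vertices

    def red_count(m):
        return sum(1 for i in m if colors[i] == 'R')

    def weight(m):
        return lam ** (-abs(red_count(m) - k))

    for _ in range(steps):
        i = rng.randrange(len(edges))
        u, v = edges[i]
        if i in matching:
            proposal = matching - {i}
        elif u not in matched and v not in matched:
            proposal = matching | {i}
        else:
            continue                       # blocked move: stay at the current matching
        # Metropolis acceptance with respect to the target weights.
        if rng.random() < min(1.0, weight(proposal) / weight(matching)):
            matching = proposal
            matched = {x for j in matching for x in edges[j]}
    return matching

# toy usage: a 4-cycle with alternating Red/Blue edges, targeting k = 1
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(metropolis_matchings(edges, ['R', 'B', 'R', 'B'], k=1, steps=20000))
```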

6.
Model Checking with Strong Fairness
In this paper we present a coherent framework for symbolic model checking of linear-time temporal logic (ltl) properties over finite-state reactive systems, taking full fairness constraints into consideration. We use the computational model of a fair discrete system (fds), which takes into account both justice (weak fairness) and compassion (strong fairness). The approach presented here reduces the model-checking problem to the question of whether a given fds is feasible (i.e. has at least one computation). The contribution of the paper is twofold: on the methodological level, it presents a direct self-contained exposition of full ltl symbolic model checking without resorting to reductions to either μ-calculus or ctl. On the technical level, it extends previous methods by dealing with compassion at the algorithmic level instead of either adding it to the specification or transforming compassion to justice. Finally, we extend ctl with past operators, and show that the basic symbolic feasibility algorithm presented here can be used to model check an arbitrary ctl formula over an fds with full fairness constraints. This research was supported in part by an infra-structure grant from the Israeli Ministry of Science and Art, a grant from the U.S.-Israel Binational Science Foundation, and a gift from Intel.

7.
Model checking based on Petri net unfoldings is an approach widely applied to cope with the state space explosion problem. In this paper, we propose a new condensed representation of a Petri net’s behaviour called merged processes, which copes well not only with concurrency, but also with other sources of state space explosion, viz. sequences of choices and non-safeness. Moreover, this representation is sufficiently similar to the traditional unfoldings, so that a large body of results developed for the latter can be re-used. Experimental results indicate that the proposed representation of a Petri net’s behaviour alleviates the state space explosion problem to a significant degree and is suitable for model checking. V. Khomenko is a Royal Academy of Engineering/Epsrc Research Fellow supported by the RAEng/Epsrc grant EP/C53400X/1 (Davac). M. Koutny is supported by the EC IST grant 511599 (Rodin). W. Vogler is supported by the DFG-project STG-Dekomposition VO 615/7-1.

8.
A WSDL-based type system for asynchronous WS-BPEL processes
We tackle the problem of providing rigorous formal foundations to current software engineering technologies for web services, and especially to WSDL and WS-BPEL, two of the most widely used XML-based standard languages for web services. We focus on a simplified fragment of WS-BPEL sufficiently expressive to model asynchronous interactions among web services in a network context. We present this language as a process-calculus-like formalism, which we call ws-calculus, for which we define an operational semantics and a type system. The semantics provides a precise operational model of programs, while the type system forces a clean programming discipline for integrating collaborating services. We prove that the operational semantics of ws-calculus and the type system are ‘sound’, and apply our approach to some illustrative examples. We expect that our formal development can be used to make the relationship between WS-BPEL programs and the associated WSDL documents precise and to support verification of their conformance.

9.
The agent design problem is as follows: given a specification of an environment, together with a specification of a task, is it possible to construct an agent that can be guaranteed to successfully accomplish the task in the environment? In this article, we study the computational complexity of the agent design problem for tasks that are of the form “achieve this state of affairs” or “maintain this state of affairs.” We consider three general formulations of these problems (in both non-deterministic and deterministic environments) that differ in the nature of what is viewed as an “acceptable” solution: in the least restrictive formulation, no limit is placed on the number of actions an agent is allowed to perform in attempting to meet the requirements of its specified task. We show that the resulting decision problems are intractable, in the sense that they are non-recursive (but recursively enumerable) for achievement tasks, and non-recursively enumerable for maintenance tasks. In the second formulation, the decision problem concerns the existence of agents that satisfy their specified task within some given number of actions. Even in this more restrictive setting, the resulting decision problems are either pspace-complete or np-complete. Our final formulation requires the environment to be history independent and bounded. In these cases polynomial-time algorithms exist: for deterministic environments the decision problems are nl-complete; for non-deterministic environments, p-complete.
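The tractable case above (achievement tasks in history-independent, bounded, deterministic environments) is essentially a reachability question; the following sketch makes that reading concrete. The state/transition encoding is hypothetical, not the article's formal model.

```python
# Achievement-task check in a deterministic, history-independent environment:
# can some sequence of actions drive the environment into a goal state?
# (Plain BFS over a finite state space; names and encoding are illustrative.)
from collections import deque

def achievable(initial, goal_states, actions, transition):
    """transition(state, action) -> next state (deterministic, history-free)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if state in goal_states:
            return True
        for a in actions:
            nxt = transition(state, a)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# toy environment: states 0..4 on a line, actions move left/right
print(achievable(0, {4}, ['L', 'R'],
                 lambda s, a: max(0, s - 1) if a == 'L' else min(4, s + 1)))
```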

10.
This paper describes a framework for retrospective document conversion in the library domain. Drawing on the experience and insight gained from projects launched over the present decade by the European Commission, it outlines the requirements for solving the retroconversion problem and traces the main phases of the associated processing. To highlight the main problems encountered in this area, the paper also outlines studies conducted by our group in the more project for the retroconversion of old catalogues belonging to two different libraries: the French National Library and the Royal Belgian Library. For the French library, the idea was to study the feasibility of a recognition approach that avoids the use of ocr and bases the strategy mainly on visual features. The challenge was to recognize a logical structure from its physical aspects. The modest results obtained from experiments in this first study led us, in the second study, to base the structural recognition methodology more on the logical aspects by focusing the analysis on the content. Furthermore, for the Belgian references, the aim was to convert reference catalogues into a more conventional unimarc format while respecting industrial constraints. Without manual intervention, a 75% correct-recognition rate was obtained on 11 catalogues containing about 4548 references. Received March 10, 1998 / Revised August 12, 1998

11.
Given a graph with a source and a sink node, the NP-hard maximum k-splittable s,t-flow (MkSF) problem is to find a flow of maximum value from s to t with a flow decomposition using at most k paths. The multicommodity variant of this problem is a natural generalization of disjoint paths and unsplittable flow problems. Constructing a k-splittable flow requires two interdependent decisions. One has to decide on k paths (routing) and on the flow values for the paths (packing). We give efficient algorithms for computing exact and approximate solutions by decoupling the two decisions into a first packing step and a second routing step. Usually the routing is considered before the packing. Our main contributions are as follows: (i) We show that for constant k a polynomial number of packing alternatives containing at least one packing used by an optimal MkSF solution can be constructed in polynomial time. If k is part of the input, we obtain a slightly weaker result. In this case we can guarantee that, for any fixed ε>0, the computed set of alternatives contains a packing used by a (1−ε)-approximate solution. The latter result is based on the observation that (1−ε)-approximate flows only require constantly many different flow values. We believe that this observation is of interest in its own right. (ii) Based on (i), we prove that, for constant k, the MkSF problem can be solved in polynomial time on graphs of bounded treewidth. If k is part of the input, this problem is still NP-hard and we present a polynomial-time approximation scheme for it.
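The packing-before-routing idea can be illustrated with a tiny enumeration: restrict path flow values to a small candidate set and list all multisets of k values before any routing decision is made. This is only a toy; the paper's construction of a polynomial-size alternative set is more careful than this.

```python
# Enumerate candidate "packings" (multisets of k path flow values) drawn from
# a geometric grid of values -- a crude stand-in for the packing step above.
from itertools import combinations_with_replacement

def candidate_packings(k, max_value, eps=0.5):
    values, v = [], float(max_value)
    while v >= 1.0:                 # values max_value, max_value/(1+eps), ...
        values.append(round(v, 3))
        v /= 1.0 + eps
    return list(combinations_with_replacement(values, k))

for packing in candidate_packings(k=3, max_value=8):
    print(packing)                  # each tuple is one packing alternative
```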

12.
13.
Romeo Rizzi, Algorithmica, 2009, 53(3): 402–424
In recent years, new variants of the minimum cycle basis (MCB) problem and new classes of cycle bases have been introduced, motivated by several applications from disparate areas of scientific and technological inquiry. At present, the complexity status of the MCB problem is settled only for undirected, directed, and strictly fundamental cycle bases (SFCBs). Weakly fundamental cycle bases (WFCBs) form a natural superclass of SFCBs. A cycle basis B of a graph G is a WFCB iff ν=0 or there exists an edge e of G and a circuit C_i in B such that B∖{C_i} is a WFCB of G−e. WFCBs still possess several of the nice properties offered by SFCBs. At the same time, several classes of graphs enjoying WFCBs of cost asymptotically smaller than the cost of the cheapest SFCBs have been found and exhibited in the literature. Considering also the computational difficulty of finding cheap SFCBs, these works advocated an in-depth study of WFCBs. In this paper, we settle the complexity status of the MCB problem for WFCBs (the MWFCB problem). The problem turns out to be APX-hard. However, in this paper we also offer a simple and practical 2⌈log₂ n⌉-approximation algorithm for the MWFCB problem. In O(nν) time, this algorithm actually returns a WFCB whose cost is at most 2⌈log₂ n⌉ ∑_{e∈E(G)} w_e, thus allowing a fast 2⌈log₂ n⌉-approximation also for the MCB problem. With this algorithm, we provide tight bounds on the cost of any MCB and MWFCB.

14.
Hierarchical Fusion of Multiple Classifiers for Hyperspectral Data Analysis
Many classification problems involve high-dimensional inputs and a large number of classes. Multiclassifier fusion approaches to such difficult problems typically centre around smart feature extraction, input resampling methods, or input space partitioning to exploit modular learning. In this paper, we investigate how partitioning of the output space (i.e. the set of class labels) can be exploited in a multiclassifier fusion framework to simplify such problems and to yield better solutions. Specifically, we introduce a hierarchical technique to recursively decompose a C-class problem into C−1 two-(meta)class problems. A generalised modular learning framework is used to partition a set of classes into two disjoint groups called meta-classes. The coupled problems of finding a good partition and of searching for a linear feature extractor that best discriminates the resulting two meta-classes are solved simultaneously at each stage of the recursive algorithm. This results in a binary tree whose leaf nodes represent the original C classes. The proposed hierarchical multiclassifier framework is particularly effective for difficult classification problems involving a moderately large number of classes. The proposed method is illustrated on a problem related to classification of landcover using hyperspectral data: a 12-class AVIRIS subset with 180 bands. For this problem, the classification accuracies obtained were superior to those of most other techniques developed for hyperspectral classification. Moreover, the class hierarchies that were automatically discovered conformed very well with human domain experts’ opinions, which demonstrates the potential of using such a modular learning approach for discovering domain knowledge automatically from data. Received: 21 November 2000, Received in revised form: 02 November 2001, Accepted: 13 December 2001
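A minimal sketch of the output-space decomposition described above: recursively split the current set of classes into two meta-classes and record a one-dimensional linear feature at each node, producing a binary tree whose leaves are the original classes. The split rule here (thresholding class means along their leading principal direction) is a simple stand-in for the coupled partition/feature-extractor optimization used in the paper.

```python
# Recursive meta-class decomposition (illustrative only).
import numpy as np

def build_class_tree(X, y, classes=None):
    classes = sorted(set(y)) if classes is None else classes
    if len(classes) == 1:
        return classes[0]                                  # leaf: an original class
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    centered = means - means.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[0]                              # project class means
    left = [c for c, s in zip(classes, scores) if s <= np.median(scores)]
    right = [c for c in classes if c not in left]
    if not right:                                          # guard against a degenerate split
        left, right = classes[:1], classes[1:]
    return {'feature': vt[0],                              # the node's 1-D discriminant direction
            'left': build_class_tree(X, y, left),
            'right': build_class_tree(X, y, right)}

# toy usage: 4 classes in 5 dimensions
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) + np.repeat(np.arange(4)[:, None], 50, axis=0)
y = np.repeat(np.arange(4), 50)
print(build_class_tree(X, y))
```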

15.
We show that several problems that are hard for various parameterized complexity classes on general graphs become fixed-parameter tractable on graphs with no small cycles. More specifically, we give fixed-parameter tractable algorithms for Dominating Set, t-Vertex Cover (where we need to cover at least t edges) and several of their variants on graphs with girth at least five. These problems are known to be W[i]-hard for some i≥1 in general graphs. We also show that the Dominating Set problem is W[2]-hard for bipartite graphs and hence for triangle-free graphs. In the case of Independent Set and several of its variants, we show these problems to be fixed-parameter tractable even in triangle-free graphs. In contrast, we show that the Dense Subgraph problem, where one is interested in finding an induced subgraph on k vertices having at least l edges, parameterized by k, is W[1]-hard even on graphs with girth at least six. Finally, we give an O(log p)-ratio approximation algorithm for the Dominating Set problem for graphs with girth at least 5, where p is the size of an optimum dominating set of the graph. This improves the previous O(log n)-factor approximation algorithm for the problem, where n is the number of vertices of the input graph. A preliminary version of this paper appeared in the Proceedings of the 10th Scandinavian Workshop on Algorithm Theory (SWAT), Lecture Notes in Computer Science, vol. 4059, pp. 304–315, 2006.

16.
Using Biologically Inspired Features for Face Processing
In this paper, we show that a new set of visual features, derived from a feed-forward model of the primate visual object recognition pathway proposed by Riesenhuber and Poggio (R&P Model) (Nature Neurosci. 2(11):1019–1025, 1999), is capable of matching the performance of some of the best current representations for face identification and facial expression recognition. Previous work has shown that the Riesenhuber and Poggio Model features can achieve a high level of performance on object recognition tasks (Serre, T., et al. in IEEE Comput. Vis. Pattern Recognit. 2:994–1000, 2005). Here we modify the R&P model in order to create a new set of features useful for face identification and expression recognition. Results from tests on the FERET, ORL and AR datasets show that these features are capable of matching and sometimes outperforming other top visual features such as local binary patterns (Ahonen, T., et al. in 8th European Conference on Computer Vision, pp. 469–481, 2004) and histogram of gradient features (Dalal, N., Triggs, B. in International Conference on Computer Vision & Pattern Recognition, pp. 886–893, 2005). Having a model based on shared lower level features, and face and object recognition specific higher level features, is consistent with findings from electrophysiology and functional magnetic resonance imaging experiments. Thus, our model begins to address the complete recognition problem in a biologically plausible way.
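For orientation, here is a much-simplified S1/C1 front end in the spirit of the Riesenhuber–Poggio feature hierarchy (Gabor filtering at a few orientations followed by local max pooling), assuming numpy and scipy; the filter parameters and pooling sizes are illustrative, not those tuned in the paper.

```python
# Simplified S1 (Gabor filtering) and C1 (local max pooling) stages.
import numpy as np
from scipy.signal import convolve2d

def gabor(size=11, lam=6.0, sigma=3.0, gamma=0.5, theta=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()                    # zero-mean filter, as in typical S1 units

def c1_features(image, n_orientations=4, pool=8):
    feats = []
    for i in range(n_orientations):
        s1 = np.abs(convolve2d(image, gabor(theta=i * np.pi / n_orientations), mode='same'))
        # C1: max pooling over non-overlapping pool x pool neighbourhoods
        h, w = (s1.shape[0] // pool) * pool, (s1.shape[1] // pool) * pool
        pooled = s1[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)

# toy usage on a random 64x64 "image"
img = np.random.default_rng(0).normal(size=(64, 64))
print(c1_features(img).shape)
```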

17.
Progress over the last decade has made the satisfiability problem (sat) a powerful and competitive practical approach to solving a wide range of industrial and academic problems. Thanks to this progress, the size and difficulty of sat instances have grown significantly. Among recent solvers, a few are parallel and most of them use the message-passing paradigm. In a previous work (Vander-Swalmen et al., IWOMP, 146–157, 2008), we presented a fine-grained parallel sat solver designed for shared memory using OpenMP and named mtss, for Multi Threaded Sat Solver. mtss extends the “guiding path” notion and uses a collaborative approach in which a rich thread is in charge of the search-tree evaluation and a set of poor threads yield logical or heuristic information to simplify the rich task. In this paper, we extend the abilities of the poor threads in mtss and present extensive comparative results on random 3-sat problems. These new experiments show that fine-grained techniques associated with poor tasks within the mtss framework can achieve very interesting speedups on multi-core processors.

18.
We study a family of problems, called Maximum Solution (Max Sol), where the objective is to maximise a linear goal function over the feasible integer assignments to a set of variables subject to a set of constraints. When the domain is Boolean (i.e. restricted to {0,1}), the maximum solution problem is identical to the well-studied Max Ones problem, and the complexity and approximability are completely understood for all restrictions on the underlying constraints. We continue this line of research by considering the Max Sol problem for relations defined by regular signed logic over finite subsets of the natural numbers; the complexity of the corresponding decision problem has recently been classified by Creignou et al. (Theory Comput. Syst. 42(2):239–255, 2008). We give sufficient conditions for when such problems are polynomial-time solvable and we prove that they are APX-hard otherwise. Similar dichotomies are also obtained for variants of the Max Sol problem.
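The objective itself is easy to state operationally; the brute-force Python sketch below (exponential, purely for definition's sake) maximises a linear goal function over assignments from a finite integer domain subject to constraints given extensionally as allowed-tuple relations.

```python
# Brute-force Max Sol: maximise a linear goal over feasible integer assignments.
from itertools import product

def max_sol(n_vars, domain, weights, constraints):
    """constraints: list of (scope, relation), scope = tuple of variable indices,
    relation = set of allowed value tuples."""
    best, best_val = None, None
    for assignment in product(domain, repeat=n_vars):
        if all(tuple(assignment[i] for i in scope) in rel
               for scope, rel in constraints):
            val = sum(w * a for w, a in zip(weights, assignment))
            if best_val is None or val > best_val:
                best, best_val = assignment, val
    return best, best_val

# maximise x0 + 2*x1 over {0,1,2} subject to x0 <= x1 (given as an explicit relation)
leq = {(a, b) for a in range(3) for b in range(3) if a <= b}
print(max_sol(2, range(3), [1, 2], [((0, 1), leq)]))
```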

19.
The Degree-Δ Closest Phylogenetic kth Root Problem (ΔCPRk) is the problem of finding a (phylogenetic) tree T from a given graph G=(V,E) such that (1) the degree of each internal node in T is at least 3 and at most Δ, (2) the external nodes (i.e. leaves) of T are exactly the elements of V, and (3) the number of disagreements, i.e. the size of the symmetric difference between E and {{u,v} : u,v are leaves of T and d_T(u,v)≤k}, is minimized, where d_T(u,v) denotes the distance between u and v in the tree T. This problem arises from theoretical studies in evolutionary biology and generalizes several important combinatorial optimization problems such as the maximum matching problem. Unfortunately, it is known to be NP-hard for all fixed constants Δ, k such that either both Δ≥3 and k≥3, or Δ>3 and k=2. This paper presents a polynomial-time 8-approximation algorithm for ΔCPR2 for any fixed Δ>3, a quadratic-time 12-approximation algorithm for 3CPR3, and a polynomial-time approximation scheme for the maximization version of ΔCPRk for any fixed Δ and k.

20.
An a posteriori (off-line) approach to solving the problem of concurrent detection and identification of quasiperiodic fragments in a numeric sequence from their pieces is considered. A solution of the problem is given for the case of an unknown number of sought fragments. It is supposed that: (1) each fragment to be found is identical to an element of a given alphabet of reference sequences comprising the same number of terms; (2) only a piece (part) of each sought fragment is potentially available for processing, with their inaccessible parts interpreted as missing data; (3) the ordinal numbers of sequence elements corresponding to the beginning of the sought fragment and to the endpoints of its piece are predetermined (nonrandom) variables, with piece endpoints varying from fragment to fragment and the sought fragments occurring in the sequence quasiperiodically; and (4) uncorrelated Gaussian noise conceals from observation the input sequence containing the quasiperiodic reference sequences. It is shown that the problem under study is, in its nature, equivalent to testing a set of hypotheses about the mean of a random Gaussian vector. The cardinality of this set grows exponentially with the vector dimension, which is the number of sequence elements. An efficient algorithm of a posteriori type, ensuring detection and identification under the maximum likelihood criterion, is developed and mathematically substantiated. Estimates for the time and space complexity of the algorithm as a function of problem parameters are given. Results of numeric simulation are presented. Aleksandr V. Kel’manov. Born April 25, 1952. Graduated from Izhevsk State Technical University in 1974. Received candidate’s degree in 1980 and doctoral degree in 1994. Leading researcher of the Sobolev Institute of Mathematics, Siberian Division, Russian Academy of Sciences. Scientific interests: mathematical methods for pattern recognition; discrete optimization; effective algorithms for the analysis and recognition of random sequences; algorithms for the solution of applied problems; and methods and algorithms for the processing, recognition, and synthesis of voice signals. Author of more than 140 papers. Sergei A. Khamidullin. Born March 28, 1952. Graduated from Novosibirsk State University in 1974. Received candidate’s degree in 1997. Senior researcher of the Sobolev Institute of Mathematics, Siberian Division, Russian Academy of Sciences. Scientific interests: mathematical methods for pattern recognition; discrete optimization; effective algorithms for the analysis and recognition of random sequences; algorithms for the solution of applied problems; and methods and algorithms for the processing, recognition, and synthesis of voice signals. Author of more than 85 papers.
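The reduction to hypothesis testing mentioned above has a simple operational reading: under uncorrelated Gaussian noise, maximum-likelihood selection among candidate mean vectors (each encoding one admissible placement of reference fragments) amounts to picking the candidate closest to the observed sequence in Euclidean norm. The sketch below shows only that selection step for an explicitly given candidate set; enumerating all candidates is exponential, which is exactly what the paper's efficient algorithm avoids.

```python
# Maximum-likelihood choice among candidate mean vectors under Gaussian noise:
# the best hypothesis minimises the squared Euclidean distance to the data.
# (Construction of candidates from the fragment alphabet is omitted here.)
import numpy as np

def ml_hypothesis(observed, candidate_means):
    observed = np.asarray(observed, dtype=float)
    dists = [float(np.sum((observed - np.asarray(m, dtype=float)) ** 2))
             for m in candidate_means]
    best = int(np.argmin(dists))
    return best, dists[best]

# toy example: noisy sequence, two candidate placements of a fragment [5, 5]
obs = [0.1, 5.2, 4.8, -0.3, 0.0]
cands = [[0, 5, 5, 0, 0], [0, 0, 5, 5, 0]]
print(ml_hypothesis(obs, cands))     # picks candidate 0
```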
