Similar Documents (20 results)
1.
This paper concerns the exploitation of user-transparent inherent parallelism of pure Prolog programs using program transformation. We describe a novel paradigm, enumerate-and-filter, for transforming generate-and-test programs for execution under the committed-choice model, extended to incorporate multiple solutions based on set enumeration. The paradigm simulates OR-parallelism by stream AND-parallelism, integrating OR-parallelism, AND-parallelism, and stream parallelism. Generate-and-test programs are classified into three categories: simple generate-and-test, recursively embedded generate-and-test, and deeply intertwined generate-and-test. The intermediate programs are further transformed to reduce structure copying and metacalls. Algorithms are presented and demonstrated by transforming representative examples of the different classes of generate-and-test programs into Flat Concurrent Prolog equivalents. Statistics show that the techniques are efficient. (Funded in part by the Cleveland Advanced Manufacturing Program through the State of Ohio, as part of its core research program grant to the Center of Automation and Intelligent Systems Research, Case Western Reserve University, and by NSF equipment grant CDA-8820390 to Kent State University.)
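As a rough illustration of the enumerate-and-filter idea (not the paper's actual Flat Concurrent Prolog output), the sketch below uses Python generators as a stand-in for FCP streams: the generator enumerates the solution set lazily, and the test runs as a filter consuming that stream. All names are illustrative.

```python
# Conceptual analog of enumerate-and-filter: generate-and-test rewritten
# as a filter over a lazy stream of candidates (here, N-queens boards).
from itertools import permutations

def enumerate_candidates(n):
    """Enumerate the solution space as a stream (set enumeration)."""
    yield from permutations(range(n))

def safe(board):
    """Test: no two queens share a diagonal (rows/columns are distinct)."""
    return all(abs(board[i] - board[j]) != j - i
               for j in range(len(board)) for i in range(j))

def enumerate_and_filter(n):
    """Filter the candidate stream instead of backtracking over it.
    In FCP the filter runs as a concurrent process consuming the stream,
    giving stream AND-parallelism that simulates OR-parallelism."""
    return (b for b in enumerate_candidates(n) if safe(b))

if __name__ == "__main__":
    print(next(enumerate_and_filter(6)))  # first solution to 6-queens
```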

2.
Multi-instance clustering with applications to multi-instance prediction
In the setting of multi-instance learning, each object is represented by a bag composed of multiple instances, instead of by a single instance as in a traditional learning setting. Previous work in this area concerns only multi-instance prediction problems, where each bag is associated with a binary (classification) or real-valued (regression) label; unsupervised multi-instance learning, where bags carry no labels, has not been studied. In this paper, the problem of unsupervised multi-instance learning is addressed and a multi-instance clustering algorithm named Bamic is proposed. Briefly, by regarding bags as atomic data items and using some form of distance metric to measure distances between bags, Bamic adapts the popular k-medoids algorithm to partition the unlabeled training bags into k disjoint groups of bags. Furthermore, based on the clustering results, a novel multi-instance prediction algorithm named Bartmip is developed. First, each bag is re-represented by a k-dimensional feature vector, where the value of the i-th feature is the distance between the bag and the medoid of the i-th group. Once bags have been transformed into feature vectors in this way, common supervised learners can be used to learn from the transformed feature vectors, each associated with the original bag's label. Extensive experiments show that Bamic effectively discovers the underlying structure of the data set and that Bartmip works quite well on various kinds of multi-instance prediction problems.
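A minimal sketch of the two-stage pipeline described above, assuming a simple average-of-minimal-distances bag metric (the paper permits any bag-level metric, e.g. Hausdorff-style distances); the data and parameter choices are illustrative:

```python
import random
import numpy as np

def bag_dist(A, B):
    """Average minimal instance distance: a simple bag-level metric."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return (d.min(axis=1).mean() + d.min(axis=0).mean()) / 2

def bamic(bags, k, iters=20):
    """k-medoids over whole bags (bags are treated as atomic data items)."""
    medoids = random.sample(range(len(bags)), k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for i, bag in enumerate(bags):  # assign each bag to nearest medoid
            groups[min(range(k), key=lambda c: bag_dist(bag, bags[medoids[c]]))].append(i)
        medoids = [min(g, key=lambda m: sum(bag_dist(bags[i], bags[m]) for i in g))
                   for g in groups if g]  # new medoid minimizes in-group distance
        k = len(medoids)
    return medoids

def bartmip_features(bags, medoids, train_bags):
    """Re-represent each bag by its distances to the k medoids; the result
    can be fed, with the bags' labels, to any standard supervised learner."""
    return np.array([[bag_dist(b, train_bags[m]) for m in medoids] for b in bags])

bags = [np.random.randn(np.random.randint(2, 6), 2) + c for c in (0, 5) for _ in range(5)]
meds = bamic(bags, 2)
X = bartmip_features(bags, meds, bags)
print(X.shape)  # (10, 2): one k-dimensional vector per bag
```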

3.
We present Searn, an algorithm for integrating search and learning to solve complex structured prediction problems such as those that occur in natural language, speech, computational biology, and vision. Searn is a meta-algorithm that transforms these complex problems into simple classification problems to which any binary classifier may be applied. Unlike current algorithms for structured learning that require decomposition of both the loss function and the feature functions over the predicted structure, Searn is able to learn prediction functions for any loss function and any class of features. Moreover, Searn comes with a strong, natural theoretical guarantee: good performance on the derived classification problems implies good performance on the structured prediction problem.
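A toy rendering of the Searn loop on sequence labeling with Hamming loss may make the reduction concrete. The memorizing per-token "learner", the two-label alphabet, and the fixed interpolation constant beta are simplistic stand-ins for the generic components Searn is parameterized by, and the sketch omits details such as removing the residual optimal-policy mass at the end:

```python
import random

LABELS = ["A", "B"]

def hamming(pred, gold):
    return sum(p != g for p, g in zip(pred, gold))

def complete(x, y, prefix, policy):
    """Finish labeling x with `policy`, starting from `prefix`."""
    seq = list(prefix)
    while len(seq) < len(x):
        seq.append(policy(x, y, seq))
    return seq

def searn(train, rounds=5, beta=0.5):
    # start from the optimal policy, which peeks at the gold labels y
    policy = lambda x, y, prefix: y[len(prefix)]
    for _ in range(rounds):
        examples = []
        for x, y in train:
            prefix = []
            for t in range(len(x)):
                # derived cost-sensitive example: cost(a) = structured loss
                # after taking action a and completing with the current policy
                costs = {a: hamming(complete(x, y, prefix + [a], policy), y)
                         for a in LABELS}
                examples.append((x[t], min(costs, key=costs.get)))
                prefix.append(policy(x, y, prefix))  # roll in
        votes = {}                       # stand-in learner: majority label per token
        for feat, lab in examples:
            votes.setdefault(feat, []).append(lab)
        h = {f: max(set(v), key=v.count) for f, v in votes.items()}
        old = policy
        # stochastic interpolation of the new classifier with the old policy
        policy = (lambda x, y, prefix, h=h, old=old:
                  h.get(x[len(prefix)], LABELS[0])
                  if random.random() < beta else old(x, y, prefix))
    return policy

pol = searn([("abab", "ABAB"), ("bb", "BB")])
print(complete("abab", "ABAB", [], pol))
```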

4.
The software model checker Blast
Blast is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, Blast either statically proves that the program satisfies the safety property, or provides an execution path that exhibits a violation of the property (or, since the problem is undecidable, does not terminate). Blast constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. This paper gives an introduction to Blast and demonstrates, through two case studies, how it can be applied to program verification and test-case generation. In the first case study, we use Blast to statically prove memory safety for C programs. We use CCured, a type-based memory-safety analyzer, to annotate a program with run-time assertions that check for safe memory operations. Then, we use Blast to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate execution scenarios that violate the assertions for the remaining run-time checks. In our second case study, we use Blast to automatically generate test suites that guarantee full coverage with respect to a given predicate. Given a C program and a target predicate p, Blast determines the program locations q for which there exists a program execution that reaches q with p true, and automatically generates a set of test vectors that cause such executions. Our experiments show that Blast can provide automated, precise, and scalable analysis for C programs.

5.
6.
Given a graph with a source and a sink node, the NP-hard maximum k-splittable s,t-flow (MkSF) problem is to find a flow of maximum value from s to t with a flow decomposition using at most k paths. The multicommodity variant of this problem is a natural generalization of disjoint-paths and unsplittable-flow problems. Constructing a k-splittable flow requires two interdependent decisions: one has to decide on k paths (routing) and on the flow values for the paths (packing). We give efficient algorithms for computing exact and approximate solutions by decoupling the two decisions into a first packing step and a second routing step; usually the routing is considered before the packing. Our main contributions are as follows: (i) We show that for constant k, a polynomial number of packing alternatives containing at least one packing used by an optimal MkSF solution can be constructed in polynomial time. If k is part of the input, we obtain a slightly weaker result: in this case we can guarantee that, for any fixed ε>0, the computed set of alternatives contains a packing used by a (1−ε)-approximate solution. The latter result is based on the observation that (1−ε)-approximate flows require only a constant number of different flow values, which we believe is of interest in its own right. (ii) Based on (i), we prove that, for constant k, the MkSF problem can be solved in polynomial time on graphs of bounded treewidth. If k is part of the input, this problem is still NP-hard, and we present a polynomial-time approximation scheme for it.

7.
Dynamic load balancing is an important technique when developing applications with unpredictable load distribution on distributed-memory multicomputers. We present Dynamo, a tool for exploiting dynamic load balancing. The tool separates the application from the load balancer, making it easy to exchange the load balancer of a given application and to experiment with different load-balancing strategies. A prototype of Dynamo has been implemented in C on an Intel iPSC/2 Hypercube. Dynamo is demonstrated by two example programs: the first solves the N-queens problem using a backtracking algorithm, and the second solves a 0-1 knapsack problem using a depth-first branch-and-bound algorithm.
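For concreteness, here is a sketch of the second demo workload, 0-1 knapsack by depth-first branch and bound, as sequential Python; Dynamo's contribution would be to farm the subtrees of such a search out to processors dynamically, so the code below is only the application side:

```python
def knapsack(values, weights, capacity):
    # sort by value density so the fractional bound is tight
    items = sorted(zip(values, weights), key=lambda it: it[0] / it[1], reverse=True)
    best = 0

    def bound(i, value, room):
        """Optimistic bound: fill the remaining room fractionally."""
        for v, w in items[i:]:
            if w <= room:
                value, room = value + v, room - w
            else:
                return value + v * room / w
        return value

    def dfs(i, value, room):
        nonlocal best
        best = max(best, value)
        if i == len(items) or bound(i, value, room) <= best:
            return                                 # prune: bound cannot beat incumbent
        if items[i][1] <= room:                    # branch: take item i
            dfs(i + 1, value + items[i][0], room - items[i][1])
        dfs(i + 1, value, room)                    # branch: skip item i

    dfs(0, 0, capacity)
    return best

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```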

8.
Component-based software development is a promising approach for controlling the complexity and quality of software systems. Nevertheless, recent advances in quality-control techniques do not seem to keep up with the growing complexity of embedded software; embedded systems often consist of dozens to hundreds of software/hardware components that exhibit complex interaction behavior, and unanticipated quality defects in a component can be a major source of system failure. To address this issue, this paper suggests a design verification approach integrated into the model-driven, component-based development methodology Marmot. The notion of abstract components (the basic building blocks of Marmot) helps to lift the level of abstraction, facilitates high-level reuse, and reduces verification complexity by localizing verification problems between abstract components both before and after refinement. This enables the identification of unanticipated design errors in the early stages of development. This work introduces the Marmot methodology, presents a design verification approach in Marmot, and demonstrates its application to the development of a μ-controller-based abstraction of a car mirror control system. An application to TinyOS shows that the approach helps to reuse models, as well as their verification results, in the development process.

9.
We refine the complexity analysis of approximation problems by relating it to a new parameter called gap location. Many of the results obtained so far for approximations yield satisfactory analysis with respect to this refined parameter, but some known results (e.g., max-k-colorability, max 3-dimensional matching, and max not-all-equal 3sat) fall short of doing so. As a second contribution, our work fills the gap in these cases by presenting new reductions. Next, we present definitions and hardness results of new approximation versions of some NP-complete optimization problems. The problems we treat are vertex cover (for which we define a different optimization problem from the one treated in Papadimitriou & Yannakakis 1991), k-edge coloring, and set splitting.

10.
L. I. Manolache, D. G. Kourie. Software, 2001, 31(13): 1211–1236.
A strategy described as "testing using M model programs" (abbreviated "M-mp testing") is investigated as a practical alternative to software testing based on manual outcome prediction. A model program implements suitably selected parts of the functional specification of the software to be tested. The M-mp testing strategy requires that M (M ≥ 1) model programs, as well as the program under test, P, be independently developed. P and the M model programs are then subjected to the same test data; difference analysis is conducted on the outputs and appropriate corrective action is taken. P and the M model programs jointly constitute an approximate test oracle. Both M-mp testing and manual outcome prediction are subject to the possibility of correlated failure. In general, the suitability of M-mp testing in a given context depends on whether building and maintaining model programs is likely to be more cost-effective than manually pre-calculating P's expected outcomes for given test data. In many contexts, M-mp testing can also facilitate the attainment of higher test-adequacy levels than would be possible with manual outcome prediction. A rigorous experiment in an industrial context is described in which M-mp testing (with M = 1) was used to test algorithmically complex scheduling software. In this case, M-mp testing turned out to be significantly more cost-effective than testing based on manual outcome prediction.
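The shape of M-mp testing with M = 1 can be sketched in a few lines; the program under test, the model program, and the test-data generator below are placeholders for real implementations:

```python
import random

def program_under_test(xs):          # P: the (possibly optimized) production code
    return sorted(xs)

def model_program(xs):               # model: simple, independently written version
    out = []
    for x in xs:                     # insertion sort, developed separately from P
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def mmp_test(cases):
    """Difference analysis: P and the model jointly act as an approximate
    oracle, so only disagreements need human attention."""
    return [xs for xs in cases if program_under_test(xs) != model_program(xs)]

cases = [[random.randint(0, 99) for _ in range(random.randint(0, 20))]
         for _ in range(1000)]
print("disagreements:", len(mmp_test(cases)))    # expect 0 here
```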

11.
Curare, the program restructurer described in this paper, automatically transforms a sequential Lisp program into an equivalent concurrent program that runs on a multiprocessor. Data dependences constrain the program's concurrent execution because, in general, two conflicting statements cannot execute in a different order without affecting the program's result. Not all dependences are essential to produce the program's result, however, and Curare attempts to transform the program so that it computes its result with fewer conflicts. An optimized program will execute with less synchronization and more concurrency. Curare then examines loops in the program to find those that are unconstrained or lightly constrained by dependences. By necessity, Curare treats recursive functions as loops and does not limit itself to explicit program loops; recursive functions offer several advantages over explicit loops, since they provide a convenient framework for inserting locks and handling the dynamic behavior of symbolic programs. Loops that are suitable for concurrent execution are changed to execute on a set of concurrent server processes. These servers execute single loop iterations and therefore need to be extremely inexpensive to invoke. Restructured programs execute significantly faster than the original sequential programs. This improvement is large enough to attract programmers to a multiprocessor, particularly since it requires little effort on their part. (This research was funded by DARPA contract numbers N00039-85-C-0269 (SPUR) and N00039-84-C-0089 (XCS) and by an NSF Presidential Young Investigator award to Paul N. Hilfinger. Additional funding came from the California MICRO program, in conjunction with Texas Instruments, Xerox, Honeywell, and Phillips/Signetics.)

12.
Progress over the last decade has made the satisfiability problem (SAT) a powerful and competitive practical approach to solving a wide range of industrial and academic problems. Thanks to this progress, the size and difficulty of SAT instances have grown significantly. Among recent solvers, a few are parallel, and most of those use the message-passing paradigm. In previous work (Vander-Swalmen et al., IWOMP, 146–157, 2008), we presented a fine-grained parallel SAT solver designed for shared memory using OpenMP and named mtss, for Multi-Threaded Sat Solver. mtss extends the "guiding path" notion and uses a collaborative approach in which a rich thread is in charge of evaluating the search tree while a set of poor threads yield logical or heuristic information to simplify the rich task. In this paper, we extend the abilities of the poor threads of mtss and present extensive comparative results on random 3-SAT problems. These new experiments show that fine-grained techniques associated with poor tasks within the mtss framework can achieve very interesting speedups on multi-core processors.

13.
14.
The Design of Discrimination Experiments
Experimentation plays a fundamental role in scientific discovery. Scientists experiment to gather data, investigate phenomena, measure quantities, and test theories. In this article, we address the problem of designing experiments to discriminate between two competing theories. Given an initial situation for which the two theories make the same prediction, the experiment-design problem is to determine how to modify the situation such that the two theories make different predictions for the modified situation. The modified situation is called a discrimination experiment. We present a knowledge-intensive method called DEED for designing discrimination experiments. The method analyzes the differences in the two theories' explanations of the prediction for the initial situation. Based on this analysis, it determines modifications to the initial situation that will result in a discrimination experiment. We illustrate the method with the design of experiments to discriminate between several pairs of qualitative theories in the fluids domain.
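A toy rendering of the underlying search problem (not the DEED method itself, which works by analyzing the theories' explanations) may help: given two rival predictors that agree on an initial situation, enumerate modifications until their predictions diverge. The "theories" and the modification grid below are invented for illustration:

```python
from itertools import product

# Two rival "theories" predicting whether a container overflows.
theory_a = lambda s: s["inflow"] > s["drain"]                    # ignores capacity
theory_b = lambda s: s["inflow"] > s["drain"] + s["capacity"] * 0.1

def design_discrimination_experiment(initial, modifications):
    """Return a modified situation on which the theories disagree."""
    assert theory_a(initial) == theory_b(initial)  # they agree initially
    for changes in product(*modifications.values()):
        s = dict(initial)
        s.update(zip(modifications.keys(), changes))
        if theory_a(s) != theory_b(s):
            return s                               # a discrimination experiment
    return None

initial = {"inflow": 1.0, "drain": 5.0, "capacity": 10.0}
mods = {"inflow": [1.0, 4.0, 6.0], "capacity": [10.0, 0.0]}
print(design_discrimination_experiment(initial, mods))
# {'inflow': 6.0, 'drain': 5.0, 'capacity': 10.0}: A predicts overflow, B does not
```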

15.
The order in which the variables are tested in a backtrack program can have a major effect on its running time. The best search order usually varies among the branches of the backtrack tree, so the number of possible search orders can be astronomical. We present an algorithm that chooses a search order dynamically by investigating all possibilities for k levels below the current level, extending beyond k levels wherever possible by setting the variables that have unique forced values. The algorithm takes time O(n^(k+1)) to process a node. For k = 2 and binary variables, the analysis for selecting the next variable to introduce into the backtrack tree makes complete use of the information contained in the two-level investigations. For larger k, or for variables of higher degree, there is no polynomial-time algorithm that makes complete use of the k-level investigations to limit searching (unless P = NP). The search rearrangement algorithm is closely related to constraint propagation. Experimental studies on conjunctive-normal-form predicates confirm that 1-level search rearrangement saves a great deal of time compared to 0-level (ordinary backtracking), and show that 2-level saves time over 1-level on large problems; for such problems with 256 variables, 2-level is better than 1-level by a factor of two.
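A rough sketch of the 1-level case for CNF formulas, assuming DIMACS-style clauses (lists of signed integer literals): each free variable is probed with both values, uniquely forced values are set immediately, and the search backtracks when a probe fails both ways. The full algorithm additionally ranks branching variables using these probe results, which this sketch omits:

```python
def falsified(clause, assign):
    return all(abs(l) in assign and assign[abs(l)] != (l > 0) for l in clause)

def consistent(clauses, assign):
    return not any(falsified(c, assign) for c in clauses)

def solve(clauses, variables, assign=None):
    assign = dict(assign or {})
    if not consistent(clauses, assign):
        return None
    free = [v for v in variables if v not in assign]
    if not free:
        return assign
    options = {}
    for v in free:                       # 1-level investigation: probe both values
        ok = [b for b in (True, False) if consistent(clauses, {**assign, v: b})]
        if not ok:
            return None                  # dead end detected one level down
        options[v] = ok
    forced = {v: ok[0] for v, ok in options.items() if len(ok) == 1}
    if forced:
        assign.update(forced)            # set unique forced values, then re-probe
        return solve(clauses, variables, assign)
    for b in (True, False):              # branch on a free variable
        result = solve(clauses, variables, {**assign, free[0]: b})
        if result is not None:
            return result
    return None

print(solve([[1, 2], [-1, 2], [-2, 3]], [1, 2, 3]))
```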

16.
The paper compares two popular strategies for solving propositional satisfiability, backtracking search and resolution, and analyzes the complexity of a directional resolution algorithm (DR) as a function of the width (w*) of the problem's graph. Our empirical evaluation confirms the theoretical prediction, showing that on low-w* problems DR is very efficient, greatly outperforming the backtracking-based Davis–Putnam–Logemann–Loveland procedure (DP). We also emphasize the knowledge-compilation properties of DR and extend it to a tree-clustering algorithm that facilitates query answering. Finally, we propose two hybrid algorithms that combine the advantages of both DR and DP. These algorithms use control parameters that bound the complexity of resolution and allow time/space trade-offs that can be adjusted to the problem structure and to the user's computational resources. Empirical studies demonstrate the advantages of such hybrid schemes.
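A compact sketch of directional resolution as bucket elimination, assuming DIMACS-style clauses: each clause sits in the bucket of its highest variable under the ordering, and buckets are processed from last variable to first, resolving all clause pairs on the bucket variable (duplicate resolvents are not deduplicated in this sketch):

```python
from itertools import product

def directional_resolution(clauses, order):
    """Returns False if the empty clause is derived (unsatisfiable),
    otherwise the compiled buckets (the directional extension)."""
    rank = {v: i for i, v in enumerate(order)}
    buckets = {v: [] for v in order}

    def place(c):
        buckets[max((abs(l) for l in c), key=rank.get)].append(c)

    for c in clauses:
        place(c)
    for v in reversed(order):            # highest-ranked bucket first
        pos = [c for c in buckets[v] if v in c]
        neg = [c for c in buckets[v] if -v in c]
        for cp, cn in product(pos, neg):
            resolvent = {l for l in cp if l != v} | {l for l in cn if l != -v}
            if any(-l in resolvent for l in resolvent):
                continue                 # tautology: drop
            if not resolvent:
                return False             # empty clause: unsatisfiable
            place(sorted(resolvent, key=abs))  # goes to a lower bucket
    return buckets

print(directional_resolution([[1, 2], [-1, 2], [-2]], [1, 2]))  # False: unsat
```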

17.
In this work, we present a semantic query optimization approach to improve the efficiency of evaluating a subset of SQL:1999 recursive queries. Using datalog notation, we can state our main contribution as an algorithm that builds a program P′ equivalent to a given program P, when both are applied over a database d satisfying a set of functional dependencies. The input program P is a linear recursive datalog program. The new program P′ has fewer distinct variables and, sometimes, fewer atoms in its rules, and is therefore cheaper to evaluate. Using CORAL and IBM DB2, P′ is empirically shown to be more efficient than the original program. (This work is partially supported by Xunta de Galicia grant PGIDIT05SIN10502PR and Ministerio de Educación y Ciencia (PGE y FEDER) grants TIC2003-06593 and TIN2006-15071-C03-03.)

18.
19.
We present an extension module for the Dune system. This module, called dune-subgrid, allows one to mark elements of another Dune hierarchical grid; the set of marked elements can then be accessed as a Dune grid in its own right. dune-subgrid is free software and is available for download. We describe the functionality and use of dune-subgrid, comment on its implementation, and give two example applications. First, we show how dune-subgrid can be used for micro-FE simulations of trabecular bone. Then we present an algorithm that allows the use of exact residuals for the adaptive solution of the spatial problems of time-discretized evolution equations.

20.
This paper presents a mathematical theory underlying a systematic method for constructing Prolog programs called stepwise enhancement. Stepwise enhancement dictates building a program starting with a skeleton program, which constitutes the basic control flow for the problem to be solved, and adding extra computations to the skeleton program using well-understood programming techniques. Each extra computation can be developed independently, and the separate enhancements combined to produce the final program. While intuition and motivation have focused on Prolog, the methods are applicable to logic programming languages more generally. The central concept in our mathematical theory for stepwise enhancement is that of a program map between two logic programs. Our definition of a program map from an enhancement to its skeleton guarantees the lifting of computations, the essence of the enhancement methodology. In this paper, we give definitions of program maps and extensions, show that the definitions preserve the property that computations lift, give examples of extensions and programming techniques that generate them, and point to directions for future work.
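A loose Python analog (the paper's setting is Prolog, where enhancements add arguments to a skeleton's clauses): the skeleton fixes the traversal, each enhancement layers an independent computation onto the same control flow, and the enhancements compose:

```python
def skeleton(tree):                      # basic control flow: walk a nested list
    for node in tree:
        if isinstance(node, list):
            skeleton(node)

def count_enhancement(tree):             # enhancement 1: count the leaves
    return sum(count_enhancement(n) if isinstance(n, list) else 1 for n in tree)

def sum_enhancement(tree):               # enhancement 2: add up the leaves
    return sum(sum_enhancement(n) if isinstance(n, list) else n for n in tree)

def combined(tree):                      # composition of both enhancements,
    c = s = 0                            # still following the skeleton's traversal
    for n in tree:
        dc, ds = combined(n) if isinstance(n, list) else (1, n)
        c, s = c + dc, s + ds
    return c, s

print(combined([1, [2, 3], [[4]], 5]))   # (5, 15)
```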
