Similar Documents
20 similar documents found (search took 31 ms)
1.
A technique is presented that brings logical variables into the scope of the well-known Turner method for normal order evaluation of functional programs by S, K, I combinator graph reduction. This extension is illustrated by SASL+LV, an extension of Turner's language SASL in which arbitrary expressions serve as formal parameters, and parameter passage is done by unification. The conceptual and practical advantages of such an extension are discussed, as well as semantic pitfalls that arise from the attendant weakening of referential transparency. Only five new combinators (LV, BV, FN, FB and UNIFY) are introduced. The resulting object code is fully upward compatible in the sense that previously compiled SASL programs remain executable with unchanged semantics. However, read-only variable usage in SASL+LV programs requires a multitasking extension of the customary stack-based evaluation method. Mechanisms are presented for managing this multitasking on both single and multiprocessor systems. Finally, directions are mentioned for applying this technique to implementations involving larger granularity combinators, and fuller semantic treatment of logical variables (e.g. accommodation of failing unifications). Research was supported in part by the Marcus Wallenberg Foundation. Research supported in part by grant CCR-8704778 from the National Science Foundation, and by an unrestricted gift from Telefonaktiebolaget LM Ericsson, Stockholm.
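As a minimal illustration of the S, K, I combinator reduction rules the abstract builds on (this sketch uses Python closures, not SASL's graph-reduction machinery, and the names are ours):

```python
# The three classic combinators, encoded as curried Python functions.
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)
K = lambda x: lambda y: x                     # K x y = x
I = lambda x: x                               # I x = x

# The standard identity: S K K behaves exactly like I.
skk = S(K)(K)
print(skk(42))  # 42
```

In a graph reducer these rules rewrite application nodes in place rather than calling closures, but the equations are the same.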

2.
A Boolean function in disjunctive normal form (DNF) is a Horn function if each of its elementary conjunctions involves at most one complemented variable. A generalized Horn function is constructed from a Horn function by disjuncting a nested set of complemented variables to it. The satisfiability problem is solvable in polynomial time for both Horn and generalized Horn functions. A Boolean function in DNF is said to be renamable Horn if it is Horn after complementation of some variables. Succinct mathematical characterizations and linear-time algorithms for recognizing renamable Horn and generalized Horn functions are given in this paper. The algorithm for recognizing renamable Horn functions gives a new method to test 2-SAT. Some computational results are also given. The authors were supported in part by the Office of Naval Research under University Research Initiative grant number N00014-86-K-0689. Chandru was also supported by NSF grant number DMC 88-07550. The authors gratefully acknowledge the partial support of NSF (Grant DMS 89-06870) and AFOSR (Grants 89-0066 and 89-0512).
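The defining property is easy to check directly. A sketch (the list-of-signed-integers encoding is ours, not the paper's): each DNF term holds at most one complemented variable.

```python
# A DNF formula as a list of terms; each term is a list of signed
# integers (positive = plain variable, negative = complemented).
# The formula is Horn iff every term has at most one negative literal.
def is_horn_dnf(terms):
    return all(sum(1 for lit in term if lit < 0) <= 1 for term in terms)

# (x1 & ~x2) | x3 : Horn, each term has <= 1 complemented variable.
print(is_horn_dnf([[1, -2], [3]]))   # True
# (~x1 & ~x2) | x3 : not Horn, first term has two.
print(is_horn_dnf([[-1, -2], [3]]))  # False
```

Recognizing the renamable case, the paper's subject, is the harder problem: one must decide whether some complementation of variables makes every term pass this test.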

3.
To solve a problem one may need to combine the knowledge of several different experts. It can happen that some of the claims of one or more experts may be in conflict with the claims of other experts. There may be several such points of conflict and any claim may be involved in several different such points of conflict. In that case, the user of the knowledge of experts may prefer a certain claim to another in one conflict-point without necessarily preferring that statement in another conflict-point. Our work constructs a framework within which the consequences of a set of such preferences (expressed as priorities among sets of statements) can be computed. We give four types of semantics for priorities, three of which are shown to be equivalent to one another. The fourth type of semantics for priorities is shown to be more cautious than the other three. In terms of these semantics for priorities, we give a function for combining knowledge from different sources such that the combined knowledge is conflict-free and satisfies all the priorities. Jack Minker and Shekhar Pradhan were supported in part by the National Science Foundation grant IRI-89-16059 and Air Force Office of Scientific Research grant 91-0350. V.S. Subrahmanian was supported in part by Army Research Office grant DAAL-03-92-G-0225, Air Force Office of Scientific Research Grant F49620-93-1-0065, and NSF grant IRI-9109755.

4.
Summary: The single-server M/G/1 queue subject to Poisson interruptions has many useful applications in computer systems modeling. The interruptions are usually characterized by their type of service-preemption discipline. This paper deals with this model in its most general setting, allowing the simultaneous presence of all types of interruptions that may be encountered in real systems. In spite of the inherent complexity of the analysis, it is possible to derive analytic closed-form expressions for interesting performance measures. The results obtained are of theoretical interest as well as of practical significance. In particular, we derive the Laplace-Stieltjes transform of the completion time associated with a customer's service and obtain the steady-state average number of customers in the system. An application to the modeling of checkpointing and recovery in a transactional system is considered. This work was supported in part by the Air Force Office of Scientific Research under grant AFOSR-84-0132, by the Army Research Office under contract DAAG29-84-K0045 and by the National Science Foundation under grant MCS-830200.
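For orientation, the baseline quantity the paper generalizes: in a plain M/G/1 queue without interruptions, the steady-state mean number of customers follows the standard Pollaczek-Khinchine formula (this sketch shows only that classical case, not the paper's completion-time analysis):

```python
def mg1_mean_customers(lam, mean_s, scv_s):
    """Mean number in system for a plain M/G/1 queue:
    L = rho + rho^2 (1 + c^2) / (2 (1 - rho)),
    where rho = lam * mean_s and c^2 is the squared coefficient
    of variation of the service time (Pollaczek-Khinchine)."""
    rho = lam * mean_s
    assert rho < 1, "queue must be stable (rho < 1)"
    return rho + rho * rho * (1 + scv_s) / (2 * (1 - rho))

# Exponential service (c^2 = 1) reduces to M/M/1: L = rho / (1 - rho).
print(mg1_mean_customers(0.5, 1.0, 1.0))  # 1.0
# Deterministic service (c^2 = 0) halves the queueing term.
print(mg1_mean_customers(0.5, 1.0, 0.0))  # 0.75
```

The paper's contribution is, in effect, replacing the raw service time in such formulas by the completion time that accounts for Poisson interruptions.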

5.
The Border Gateway Protocol (BGP) for interdomain routing is designed to allow autonomous systems (ASes) to express policy preferences over alternative routes. We model these preferences as arising from an AS's underlying utility for each route and study the problem of finding a set of routes that maximizes the overall welfare (i.e., the sum of all ASes' utilities for their selected routes). We show that, if the utility functions are unrestricted, this problem is NP-hard even to approximate closely. We then study a natural class of restricted utilities that we call next-hop preferences. We present a strategyproof, polynomial-time computable mechanism for welfare-maximizing routing over this restricted domain. However, we show that, in contrast to earlier work on lowest-cost routing mechanism design, this mechanism appears to be incompatible with BGP and hence difficult to implement in the context of the current Internet. Our contributions include a new complexity measure for Internet algorithms, dynamic stability, which may be useful in other problem domains. Supported in part by ONR grant N00014-01-1-0795 and NSF grant ITR-0219018. Supported by ONR grant N00014-01-1-0795 and NSF grant ITR-0219018. Most of this work was done while the author was at Yale University. Supported in part by NSF grants ITR-0121555 and ANI-0207399. This work was supported by the DoD University Research Initiative (URI) program administered by the Office of Naval Research under Grant N00014-01-1-0795. It was presented in preliminary form at the 2004 ACM Symposium on Principles of Distributed Computing [7]. Portions of this work appeared in preliminary form in the second author's PhD Thesis [16].

6.
One of the most appealing features of constraint programming is its rich constraint language for expressing combinatorial optimization problems. This paper demonstrates that traditional combinators from constraint programming have natural counterparts for local search, although their underlying computational model is radically different. In particular, the paper shows that constraint combinators, such as logical and cardinality operators, reification, and first-class expressions can all be viewed as differentiable objects. These combinators naturally support elegant and efficient models, generic search procedures, and partial constraint satisfaction techniques for local search. Experimental results on a variety of applications demonstrate the expressiveness and the practicability of the combinators.
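One common way to make constraints "differentiable objects" for local search is to give each constraint a violation degree and let logical combinators combine degrees numerically. A hedged sketch of that idea (the encoding and names are ours, not the paper's):

```python
# Each constraint reports how far it is from being satisfied (0 = satisfied).
def violations_eq(x, y):
    return abs(x - y)

# Logical combinators over violation degrees:
def v_and(*degrees):
    return sum(degrees)   # conjunction: every conjunct must be repaired

def v_or(*degrees):
    return min(degrees)   # disjunction: repairing the cheapest disjunct suffices

# Violation of "x == 3 AND (y == 1 OR y == 5)" at the point x=2, y=4:
x, y = 2, 4
d = v_and(violations_eq(x, 3), v_or(violations_eq(y, 1), violations_eq(y, 5)))
print(d)  # |2-3| + min(|4-1|, |4-5|) = 1 + 1 = 2
```

A local-search procedure can then move toward assignments that decrease the combined violation degree, treating the combinator tree as an objective function.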

7.
This paper is concerned with improvement in optical image quality by image restoration. Image restoration is an ill-posed inverse problem which involves the removal or minimization of degradations caused by noise and blur in an image, resulting from, in this case, imaging through a medium. Our work here concerns the use of the underlying Toeplitz structure of such problems, and associated techniques for accelerating the convergence of iterative image restoration computations. Denoising methods, including total variation minimization, followed by segmentation-based preconditioning methods for minimum residual conjugate gradient iterations, are investigated. Regularization is accomplished by segmenting the image into (smooth) segments and varying the preconditioners across the segments. By taking advantage of the Toeplitz structure, our algorithms can be implemented with computational complexity of only O(l n^2 log n), where n^2 is the number of pixels in the image and l is the number of segments used. Also, parallelization is straightforward. Numerical tests are reported for atmospheric imaging problems, including the case of spatially varying blur. Research supported in part by a National Science Foundation Postdoctoral Research Fellowship. Research sponsored by the U.S. Air Force Office of Scientific Research under grant F49620-97-1-1039. Research sponsored by the U.S. Air Force Office of Scientific Research under grant F49620-97-1-0139, and by the National Science Foundation under grant CCR-96-23356. Research sponsored by the National Science Foundation under grant CCR-96-23356.

8.
Generalizing the notion of function composition, we introduce the concept of conditional function composition and present a theory of such compositions. We use the theory to describe the semantics of a programming language with exceptions, and to relate exceptions to the IF statement. Supported in part by Air Force Office of Scientific Research grant number 91-0070. Now at DEC Systems Research Center, Palo Alto, CA.
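One plausible reading of composition in the presence of exceptions, offered only as a sketch (the paper's formal definitions are not given in this abstract; the ok/exc encoding is entirely ours): a computation yields either a normal value or an exception, and composition applies the next function only in the normal case.

```python
# A computation result is either ("ok", value) or ("exc", exception_tag).
def ok(v):  return ("ok", v)
def exc(e): return ("exc", e)

def compose(f, g):
    """g after f, but conditionally: g runs only while no exception
    has been raised; an exception short-circuits the rest of the chain."""
    def h(x):
        tag, v = f(x)
        return g(v) if tag == "ok" else (tag, v)
    return h

safe_inv = lambda x: exc("div0") if x == 0 else ok(1 / x)
double   = lambda x: ok(2 * x)

print(compose(safe_inv, double)(4))  # ('ok', 0.5)
print(compose(safe_inv, double)(0))  # ('exc', 'div0')
```

This conditional behavior is exactly what ordinary function composition lacks, and it mirrors how an IF statement can branch on whether an exception occurred.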

9.
Given a d-dimensional square mesh or square torus G and a c-dimensional square mesh or square torus H such that G and H are of the same size but may differ in dimensions and shapes, we study the problem of simulating in H parallel neighboring communications in G. We assume that the nodes in H have only unit-size buffers associated with the links, and that packets can be sent and received simultaneously from all outbound links and inbound links of the nodes. For permutation-type parallel neighboring communications, for all the combinations of graph types and graph shapes of G and H, except for the case in which d < c and c is not divisible by d, we show that H can simulate G either optimally or optimally up to a constant multiplicative factor for fixed values of d and c. For scatter-type parallel neighboring communications, for some special cases of G and H, we also show that H can optimally simulate G. All these simulation times are smaller than the diameter of H, the lower bound on the routing complexity to support general data permutations in H. This work has been partially supported by National Science Foundation PYI award DCR84-51408, IBM research grant, AT&T Information System research grant, National Science Foundation CER grant MCS82-19196, Army Research Office grant DAAG-29-84-K-0061, Canada NSERC research grant OGP0041648, and British Science and Engineering Research Council visiting fellowship research grant.

10.
Determining the identity and pose of occluded objects from noisy data is a critical step in interacting intelligently with an unstructured environment. Previous work has shown that local measurements of position and surface orientation may be used in a constrained search process to solve this problem, for the case of rigid objects, either two-dimensional or three-dimensional. This paper considers the more general problem of recognizing and locating objects that can vary in parameterized ways. We consider two-dimensional objects with rotational, translational, or scaling degrees of freedom, and two-dimensional objects that undergo stretching transformations. We show that the constrained search method can be extended to handle the recognition and localization of such generalized classes of object families. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's artificial intelligence research is provided in part by an Office of Naval Research University Research Initiative grant under contract N00014-86-K-0180, in part by the Advanced Research Projects Agency of the Department of Defense under Army contract number DACA76-85-C-0010, and in part by DARPA under Office of Naval Research contract N00014-85-K-0124. A preliminary version of this work appeared in the proceedings of the First International Conference on Computer Vision, London, England, 1987.

11.
This paper presents an automated and compositional procedure to solve the substitutability problem in the context of evolving software systems. Our solution contributes two techniques for checking correctness of software upgrades: (1) a technique based on simultaneous use of over- and under-approximations obtained via existential and universal abstractions; (2) a dynamic assume-guarantee reasoning algorithm—previously generated component assumptions are reused and altered on-the-fly to prove or disprove the global safety properties on the updated system. When upgrades are found to be non-substitutable, our solution generates constructive feedback to developers showing how to improve the components. The substitutability approach has been implemented and validated in the ComFoRT reasoning framework, and we report encouraging results on an industrial benchmark. This is an extended version of a paper, Dynamic Component Substitutability Analysis, published in the Proceedings of the Formal Methods 2005 Conference, Lecture Notes in Computer Science, vol. 3582, by the same authors. This research was sponsored by the National Science Foundation under grant nos. CNS-0411152, CCF-0429120, CCR-0121547, and CCR-0098072, the Semiconductor Research Corporation under grant no. TJ-1366, the US Army Research Office under grant no. DAAD19-01-1-0485, the Office of Naval Research under grant no. N00014-01-1-0796, the ICAST project and the Predictable Assembly from Certifiable Components (PACC) initiative at the Software Engineering Institute, Carnegie Mellon University. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of any sponsoring institution, the US government or any other entity.

12.
This paper adds counterfactuals to the framework of knowledge-based programs of Fagin, Halpern, Moses, and Vardi [3,4]. The use of counterfactuals is illustrated by designing a protocol in which an agent stops sending messages once it knows that it is safe to do so. Such behavior is difficult to capture in the original framework because it involves reasoning about counterfactual executions, including ones that are not consistent with the protocol. Attempts to formalize these notions without counterfactuals are shown to lead to rather counterintuitive behavior. Received: 15 November 2001, Accepted: 15 April 2004, Published online: 13 July 2004. Joseph Y. Halpern: Work supported in part by NSF under grants IRI-96-25901, IIS-0090145, and CTC-0208535, by the Air Force Office of Scientific Research under grants F49620-96-1-0323 and F48620-02-1-0101, by ONR under grants N00014-00-1-03-41 and N00014-01-1-0795, and by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the ONR under grant N00014-01-1-0795. A preliminary version of this paper appeared in the Proceedings of the Seventh Conference on Theoretical Aspects of Rationality and Knowledge (TARK), 1998.

13.
Summary: In modern imperative languages there are two commonly occurring ways to activate concurrently running tasks, splitting (cobegin...coend) and spawning. The programming language Ada makes use of both forms of task activation. We present a formal system for verifying partial correctness specifications of Ada tasks activated by spawning. The system is based upon a view of tasks as histories of events. We show how the mindset of splitting may be applicable when developing a formal system for reasoning about spawning. The resultant proof system is compositional, and a robust extension of partial correctness proof systems for sequential constructs. A transition model is given for spawning, and the proof system is proven complete in the sense of Cook [10] relative to this model, under certain reasonable assumptions. The specific proof rules given apply to a subset of Ada without real-time and distributed termination. Our approach to task verification applies to other imperative languages besides Ada, and the essential parts of our methodology are applicable to other formal systems besides those based on partial correctness reasoning. Sigurd Meldal is professor of informatics at the University of Bergen. He is interested in techniques and tools based on formal methods for development of concurrent software. His current foci are the investigation of algebraic approaches to nondeterminism, and the participation in the design of a concurrent specification, prototyping and implementation language. The latter supplements formal proof with support for run time control of consistency between concurrent systems as specified and as implemented. Meldal received his cand. real. (1982) and dr. scient. (1986) degrees in informatics from the University of Oslo. This research was supported by a grant from the Norwegian Research Council for Science and the Humanities, by the Defense Advanced Research Projects Agency/Information Systems Technology Office under the Office of Naval Research contract N00014-90-J1232, by the Air Force Office of Scientific Research under Grant AFOSR83-0255 and by a Fulbright Scholarship from the US Educational Foundation in Norway.

14.
Support functions and samples of convex bodies in R^n are studied with regard to conditions for their validity or consistency. Necessary and sufficient conditions for a function to be a support function are reviewed in a general setting. An apparently little known classical such result for the planar case due to Rademacher and based on a determinantal inequality is presented and a generalization to arbitrary dimensions is developed. These conditions are global in the sense that they involve values of the support function at widely separated points. The corresponding discrete problem of determining the validity of a set of samples of a support function is treated. Conditions similar to the continuous inequality results are given for the consistency of a set of discrete support observations. These conditions are in terms of a series of local inequality tests involving only neighboring support samples. Our results serve to generalize existing planar conditions to arbitrary dimensions by providing a generalization of the notion of nearest neighbor for plane vectors which utilizes a simple positive cone condition on the respective support sample normals. This work was partially supported by the Center for Intelligent Control Systems under the U.S. Army Research Office Grant DAAL03-92-G-0115, the Office of Naval Research under Grant N00014-91-J-1004, and the National Science Foundation under Grant MIP-9015281. Partially supported by the National Science Foundation under grant IRI-9209577 and by the U.S. Army Research Office under grant DAAL03-92-G-0320.

15.
A systematic transformation method based on incrementalization and value caching generalizes a broad family of program optimizations. It yields significant performance improvements in many program classes, including iterative schemes that characterize hardware specifications. CACHET is an interactive incrementalization tool. Although incrementalization is highly structured and automatable, better results are obtained through interaction, where the main task is to guide term rewriting based on data-specific identities. Incrementalization specialized to iteration corresponds to strength reduction, a familiar program improvement technique. This correspondence is illustrated by the derivation of a hardware-efficient nonrestoring square-root algorithm, which has also served as an example of theorem prover-based implementation verification. Published online: 9 October 2001. S.D. Johnson was supported, in part, by the National Science Foundation under grant MIP-9601358. Y.A. Liu was supported in part by the National Science Foundation under grant CCR-9711253, the Office of Naval Research under grant N00014-99-1-0132, and Motorola Inc. under a Motorola University Partnership in Research Grant. Y. Zhang is a student recipient of a Motorola University Partnership in Research Grant.
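The correspondence between incrementalization and strength reduction can be seen in the textbook case: a repeated multiplication inside a loop is replaced by an accumulator updated by addition. A generic illustration (ours, not CACHET output):

```python
# Before: one multiplication per iteration.
def scaled_naive(n, c):
    return [i * c for i in range(n)]

# After incrementalization: the value i*c is cached in an accumulator
# and maintained incrementally by addition (strength reduction).
def scaled_incremental(n, c):
    out, acc = [], 0
    for _ in range(n):
        out.append(acc)
        acc += c  # strength-reduced update: add, don't multiply
    return out

print(scaled_incremental(5, 3))  # [0, 3, 6, 9, 12]
```

The nonrestoring square-root derivation in the paper applies the same principle, maintaining more elaborate cached quantities across iterations.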

16.
In this paper we study the external memory planar point enclosure problem: Given N axis-parallel rectangles in the plane, construct a data structure on disk (an index) such that all K rectangles containing a query point can be reported I/O-efficiently. This problem has important applications in, e.g., spatial and temporal databases, and is dual to the important and well-studied orthogonal range searching problem. Surprisingly, despite the fact that the problem can be solved optimally in internal memory with linear space and O(log N + K) query time, we show that one cannot construct a linear sized external memory point enclosure data structure that can be used to answer a query in O(log_B N + K/B) I/Os, where B is the disk block size. To obtain this bound, Ω(N/B^{1−ε}) disk blocks are needed for some constant ε > 0. With linear space, the best obtainable query bound is O(log^2 N + K/B) if a linear output term O(K/B) is desired. To show this we prove a general lower bound on the tradeoff between the size of the data structure and its query cost. We also develop a family of structures with matching space and query bounds. An extended abstract of this paper appeared in Proceedings of the 12th European Symposium on Algorithms (ESA'04), Bergen, Norway, September 2004, pp. 40–52. L. Arge's research was supported in part by the National Science Foundation through RI grant EIA–9972879, CAREER grant CCR–9984099, ITR grant EIA–0112849, and U.S.-Germany Cooperative Research Program grant INT–0129182, as well as by the US Army Research Office through grant W911NF-04-01-0278, by an Ole Roemer Scholarship from the Danish National Science Research Council, a NABIIT grant from the Danish Strategic Research Council and by the Danish National Research Foundation. V. Samoladas' research was supported in part by a grant co-funded by the European Social Fund and National Resources-EPEAEK II-PYTHAGORAS. K. Yi's research was supported in part by the National Science Foundation through ITR grant EIA–0112849, U.S.-Germany Cooperative Research Program grant INT–0129182, and Hong Kong Direct Allocation Grant (DAG07/08).
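For clarity, the query the index must answer, stated as a naive linear scan (the paper's contribution is making this I/O-efficient on disk; the scan below is only the specification):

```python
# Report (the indices of) all axis-parallel rectangles containing point q.
def enclosing(rects, q):
    qx, qy = q
    return [i for i, (x1, y1, x2, y2) in enumerate(rects)
            if x1 <= qx <= x2 and y1 <= qy <= y2]

R = [(0, 0, 4, 4), (2, 2, 6, 6), (5, 5, 7, 7)]
print(enclosing(R, (3, 3)))  # [0, 1]
```

The lower bound in the paper says that, unlike in internal memory, no linear-size disk structure can answer this in O(log_B N + K/B) I/Os.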

17.
Research on traditional association rules has gained great attention during the past decade. Generally, an association rule A→B is used to predict that B likely occurs when A occurs. This is a kind of strong correlation, and indicates that the two events will probably happen simultaneously. However, in real world applications such as bioinformatics and medical research, there are many follow-up correlations between itemsets A and B, such as: B is likely to occur n times after A has occurred m times. That is, the correlative itemsets do not belong to the same transaction. We refer to this relation as a follow-up correlation pattern (FCP). The task of mining FCP patterns brings more challenges on efficient processing than normal pattern discovery because the number of potentially interesting patterns becomes extremely large as the length limit of transactions no longer exists. In this paper, we develop an efficient algorithm to identify FCP patterns in time-related databases. We also experimentally evaluate our approach, and provide extensive results on mining this new kind of patterns. This work is partially supported by Australian large ARC grants (DP0449535, DP0559536 and DP0667060), a China NSF major research Program (60496327), a China NSF grant (60463003), an Overseas Outstanding Talent Research Program of the Chinese Academy of Sciences (06S3011S01), an Overseas-Returning High-level Talent Research Program of China Hunan-Resource Ministry, and an Innovation Project of Guangxi Graduate Education (2006106020812M35).
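The classical notions the abstract starts from, support and confidence of a rule A→B over a set of transactions, can be stated in a few lines (FCP patterns themselves additionally require the temporal ordering the paper introduces, which this sketch does not model):

```python
# Transactions are sets of items; itemsets are sets.
def support(itemset, transactions):
    """Fraction of transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(A, B, transactions):
    """conf(A -> B) = supp(A u B) / supp(A)."""
    return support(A | B, transactions) / support(A, transactions)

T = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"b", "c"}]
# 2 of the 3 transactions containing 'a' also contain 'b': conf = 2/3.
print(confidence({"a"}, {"b"}, T))
```

An FCP, by contrast, relates occurrences across different transactions ordered in time, which is why the transaction-length bound that limits classical pattern counts no longer applies.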

18.
This paper presents an analytically robust, globally convergent approach to managing the use of approximation models of varying fidelity in optimization. By robust global behaviour we mean the mathematical assurance that the iterates produced by the optimization algorithm, started at an arbitrary initial iterate, will converge to a stationary point or local optimizer for the original problem. The approach presented is based on the trust region idea from nonlinear programming and is shown to be provably convergent to a solution of the original high-fidelity problem. The proposed method for managing approximations in engineering optimization suggests ways to decide when the fidelity, and thus the cost, of the approximations might be fruitfully increased or decreased in the course of the optimization iterations. The approach is quite general. We make no assumptions on the structure of the original problem, in particular, no assumptions of convexity and separability, and place only mild requirements on the approximations. The approximations used in the framework can be of any nature appropriate to an application; for instance, they can be represented by analyses, simulations, or simple algebraic models. This paper introduces the approach and outlines the convergence analysis. This research was supported by the Dept. of Energy grant DEFG03-95ER25257 and Air Force Office of Scientific Research grant F49620-95-1-0210. This research was supported by the National Aeronautics and Space Administration under NASA Contract No. NAS1-19480 while the author was in residence at the Institute for Computer Applications in Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA 23681, USA. This research was supported by the Air Force Office of Scientific Research grant F49620-95-1-0210 and by the National Aeronautics and Space Administration under NASA Contract No. NAS1-19480 while the author was in residence at the Institute for Computer Applications in Science and Engineering (ICASE), NASA Langley Research Center, Hampton, VA 23681, USA.

19.
The use of Gibbs random fields (GRF) to model images poses the important problem of the dependence of the patterns sampled from the Gibbs distribution on its parameters. Sudden changes in these patterns as the parameters are varied are known as phase transitions. In this paper we concentrate on developing a general deterministic theory for the study of phase transitions when a single parameter, namely, the temperature, is varied. This deterministic framework is based on a technique known as the mean-field approximation, which is widely used in statistical physics. Our mean-field theory is general in that it is valid for any number of gray levels, any pairwise interaction potential, any neighborhood structure or size, and any set of constraints imposed on the desired images. The mean-field approximation is used to compute closed-form estimates of the critical temperatures at which phase transitions occur for two texture models widely used in the image modeling literature: the Potts model and the autobinomial model. The mean-field model allows us to gain insight into the Gibbs model behavior in the neighborhood of these temperatures. These analytical results are verified by computer simulations that use a novel mean-field descent algorithm. An important spinoff of our mean-field theory is that it allows us to compute approximations for the correlation functions of GRF models, thus bridging the gap between neighborhood-based and correlation-based a priori image models. The work of I.M. Elfadel was supported in part by the National Science Foundation under grant MIP-91-17724. The work of A.L. Yuille was supported by the Brown, Harvard, and MIT Center for Intelligent Control Systems under U.S. Army Research Office grant DAAL03-86-C-0171, by the Defense Advanced Research Projects Agency under contract AFOSR-89-0506, and by the National Science Foundation under grant IRI-9003306.

20.
Morphological neural networks are based on a new paradigm for neural computing. Instead of adding the products of neural values and corresponding synaptic weights, the basic neural computation in a morphological neuron takes the maximum or minimum of the sums of neural values and their corresponding synaptic weights. By taking the maximum (or minimum) of sums instead of the sum of products, morphological neuron computation is nonlinear before thresholding. As a consequence, the properties of morphological neural networks are drastically different from those of traditional neural network models. In this paper we restrict our attention to morphological associative memories. After a brief review of morphological neural computing and a short discussion about the properties of morphological associative memories, we present new methodologies and associated theorems for retrieving complete stored patterns from noisy or incomplete patterns using morphological associative memories. These methodologies are derived from the notions of morphological independence, strong independence, minimal representations of patterns vectors, and kernels. Several examples are provided in order to illuminate these novel concepts.
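The basic computation the abstract describes is easy to write down: where a classical neuron computes a sum of products Σⱼ xⱼ·wᵢⱼ, a morphological neuron computes a maximum of sums maxⱼ(xⱼ + wᵢⱼ) (or the dual minimum). A direct sketch:

```python
# One morphological neuron: max of (input + weight) over all inputs.
def morph_neuron_max(x, w):
    return max(xj + wj for xj, wj in zip(x, w))

# A layer applies one such neuron per weight row.
def morph_layer(x, W):
    return [morph_neuron_max(x, row) for row in W]

x = [1, 4, 2]
W = [[0, -1, 5],   # max(1+0, 4-1, 2+5) = 7
     [2,  0, 0]]   # max(1+2, 4+0, 2+0) = 4
print(morph_layer(x, W))  # [7, 4]
```

Because max and + form a different algebra than + and × (a max-plus semiring rather than a ring), the resulting networks behave very differently from linear-threshold models, which is what the associative-memory theory in the paper exploits.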
