Similar Literature
 Found 20 similar documents (search time: 906 ms)
1.
A planning and scheduling (P&S) system takes as input a domain model and a goal, and produces a plan of actions which, when executed, will achieve the goal. A P&S system typically also offers plan execution and monitoring engines. Due to the non-deterministic nature of planning problems, it is a challenge to construct correct and reliable P&S systems, including, for example, their declarative domain models. Verification and validation (V&V) techniques have been applied to address these issues. Furthermore, V&V systems have been applied to actually perform planning, and conversely, P&S systems have been applied to perform V&V of more traditional software. This article overviews some of the literature on the fruitful interaction between V&V and P&S.

2.
A DIN Kernel LISP Draft (DKLisp) has been developed by DIN as a reaction to Action D1 (N79), a short-term goal of ISO WG16. It defines a subset language that is as compatible as possible with the ANSI Common Lisp draft, but also with the EuLisp draft. It combines the most important mainstream LISP features in a single, compact, but nevertheless complete language definition, which could thereby serve as the basis for a short-term international Lisp standard. Besides its functional and knowledge-processing features, the expressive power of the language is comparable with that of contemporary procedural languages such as C++ (without libraries, of course). Important features of DKLisp are:
  • to be a “Lisp-1,” but allowing an easy “Lisp-2” transformation;
  • to be a simple, powerful, and standardized educational Lisp;
  • to omit all features that are unclean or under heavy discussion;
  • DKLisp programs run nearly unchanged in Common Lisp;
  • DKLisp contains a simple object and package system;
  • DKLisp contains those data classes and control structures also common to most modern Lisp and non-Lisp languages;
  • DKLisp offers simple stream I/O;
  • DKLisp contains a clean, unified, hierarchical class/type system;
  • DKLisp contains the typical “Lisp features” in an orthogonal way;
  • DKLisp allows and encourages really small but powerful implementations;
  • DKLisp comes in levels, allowing ANSI Common Lisp to be an extension of DKLisp level-1.
The present document is the second version of the proposal, namely version 1.2, with slight differences with respect to the one sent to ISO. The sources of these changes were the remarks generously sent by many readers of the previous attempt.

    3.
    4.
    In this article, we formulate and study quantum analogues of randomized search heuristics, which make use of Grover search (Proceedings of the 28th Annual ACM Symposium on Theory of Computing, pp. 212–219, ACM, New York, 1996) to accelerate the search for improved offspring. We then specialize this formulation to two specific search heuristics: Random Local Search and the (1+1) Evolutionary Algorithm. We call the resulting quantum versions of these search heuristics Quantum Local Search and the (1+1) Quantum Evolutionary Algorithm. We conduct a rigorous runtime analysis of these quantum search heuristics in the computation model of quantum algorithms, which, besides classical computation steps, also permits those unique to quantum computing devices. To this end, we study the six elementary pseudo-Boolean optimization problems OneMax, LeadingOnes, Discrepancy, Needle, Jump, and TinyTrap. It turns out that the advantage of the respective quantum search heuristic over its classical counterpart varies with the problem structure, and ranges from no speedup at all for the problem Discrepancy to exponential speedup for the problem TinyTrap. We show that these runtime behaviors are closely linked to the probabilities of performing successful mutations in the classical algorithms.
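    The classical baseline in this comparison is easy to make concrete. Below is a minimal sketch of the (1+1) Evolutionary Algorithm on OneMax; the parameter choices and function names are illustrative, not taken from the paper:

```python
import random

def one_max(bits):
    """Fitness: the number of ones in the bit string."""
    return sum(bits)

def one_plus_one_ea(n, max_iters=100_000, seed=0):
    """(1+1) EA: flip each bit independently with probability 1/n,
    keep the offspring if it is at least as fit as the parent."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for step in range(max_iters):
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if one_max(child) >= one_max(parent):
            parent = child
        if one_max(parent) == n:
            return step + 1  # iterations until the optimum was found
    return max_iters

print(one_plus_one_ea(50))  # OneMax is solved in expected O(n log n) iterations
```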

    5.
    Graphs appear in numerous applications, including cyber security, the Internet, social networks, protein networks, recommendation systems, citation networks, and many more. Graphs with millions or even billions of nodes and edges are commonplace. How can such large graphs be stored efficiently? What are the core operations/queries on those graphs? How can graph queries be answered quickly? We propose Gbase, an efficient analysis platform for large graphs. The key novelties lie in (1) our storage and compression scheme for parallel, distributed settings and (2) the carefully chosen graph operations and their efficient implementations. We designed and implemented an instance of Gbase using MapReduce/Hadoop. Gbase provides a parallel indexing mechanism for graph operations that both saves storage space and accelerates query responses. We run numerous experiments on real and synthetic graphs, spanning billions of nodes and edges, and we show that our proposed Gbase is indeed fast, scalable, and nimble, with significant savings in space and time.
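    The storage idea can be illustrated in a few lines. The following is a hypothetical sketch of block-based edge storage; the real Gbase scheme runs on MapReduce/Hadoop with its own compression, so the block size and layout here are invented purely for illustration:

```python
from collections import defaultdict

def to_blocks(edges, k):
    """Partition the adjacency matrix into k x k node blocks and
    keep only the non-empty blocks -- empty regions cost nothing."""
    blocks = defaultdict(set)
    for src, dst in edges:
        blocks[(src // k, dst // k)].add((src % k, dst % k))
    return blocks

edges = [(0, 1), (1, 2), (2, 0), (1_000_000, 1_000_001)]
blocks = to_blocks(edges, k=1024)
print(len(blocks), "non-empty blocks instead of one huge sparse matrix")
```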

    6.
    The Mizar Mathematical Library is one of the largest libraries of formalized and mechanically verified mathematics. Its language is highly optimized for authoring by humans. As in natural languages, the meaning of an expression is influenced by its (mathematical) context in a way that is natural to humans, but harder to specify for machine manipulation. Its custom file format can thus make access to the library difficult. Indeed, the Mizar system itself is currently the only system that can fully operate on the Mizar library. This paper presents a translation of the Mizar library into the OMDoc format (Open Mathematical Documents), an XML-based representation format for mathematical knowledge. OMDoc is geared towards machine support and interoperability by making formula structure and context dependencies explicit. The Mizar library thus becomes accessible to a wide range of OMDoc-based tools for formal mathematics and knowledge management. We exemplify interoperability by indexing the translated library in the MathWebSearch engine, which provides an “applicable theorem search” service (almost) out of the box.

    7.
    In this paper, we present a comprehensive overview of the property-based monitoring framework for analog and mixed-signal systems. Our monitoring approach is centered around the Signal Temporal Logic (STL) specification language, and is implemented in a stand-alone monitoring tool (AMT). We apply this property-based methodology to two industrial case studies and briefly present some recent extensions of STL that were motivated by the practical needs of analog designers.
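    To give a flavour of property-based monitoring, here is a toy monitor for a bounded-invariance property in the spirit of STL. The operators and the sampling scheme are heavily simplified assumptions; AMT itself handles full STL over analog signals:

```python
def always_within(signal, lo, hi):
    """Monitor G (lo <= x <= hi) over a sampled signal.
    Returns (verdict, index of the first violation or None)."""
    for i, (t, x) in enumerate(signal):
        if not (lo <= x <= hi):
            return False, i
    return True, None

# samples of an analog signal as (time, value) pairs
trace = [(0.0, 1.1), (0.1, 1.3), (0.2, 2.7), (0.3, 1.2)]
ok, where = always_within(trace, lo=0.9, hi=2.5)
print(ok, where)  # False, 2 -- the property is violated at t=0.2
```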

    8.
    A typestate property describes which operations are available on an object or a group of inter-related objects, depending on this object's or group's internal state, the typestate. Researchers in the field of static analysis have devised static program analyses to prove the absence of typestate-property violations on all possible executions of a given program under test. Researchers in runtime verification, on the other hand, have developed powerful monitoring approaches that guarantee to capture property violations on actual executions. Although static analysis can greatly benefit runtime monitoring, up until now most static analyses have been incompatible with most monitoring tools. We present Clara, a novel framework that makes these approaches compatible. With Clara, researchers in static analysis can easily implement powerful typestate analyses. Runtime-verification researchers, on the other hand, can use Clara to specialize AspectJ-based runtime monitors to a particular target program. To make aspects compatible with Clara, the monitoring tool annotates them with so-called dependency state machines. Clara uses the static analyses to automatically convert an annotated monitoring aspect into a residual runtime monitor that is triggered by fewer program locations. If the static analysis succeeds on all locations, this proves that the program fulfills the stated typestate properties, making runtime monitoring entirely obsolete. If not, the residual runtime monitor is at least optimized. We instantiated Clara with three static typestate analyses and applied these analyses to monitoring aspects generated from tracematches. In two-thirds of the cases in our experiments, the static analysis succeeds on all locations, proving that the program fulfills the stated properties and completely obviating the need for runtime monitoring. In the remaining cases, the runtime monitor is often significantly optimized.
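    The typestate idea is simple to state operationally. Here is a minimal runtime monitor for a hypothetical file-like typestate (open, then read, then close); it is unrelated to Clara's actual AspectJ machinery and only illustrates what a residual monitor checks at runtime:

```python
class TypestateMonitor:
    """Tracks the typestate of one object; raises on a forbidden event."""
    TRANSITIONS = {
        ("closed", "open"): "opened",
        ("opened", "read"): "opened",
        ("opened", "close"): "closed",
    }

    def __init__(self):
        self.state = "closed"

    def event(self, op):
        key = (self.state, op)
        if key not in self.TRANSITIONS:
            raise RuntimeError(f"typestate violation: {op} in state {self.state}")
        self.state = self.TRANSITIONS[key]

m = TypestateMonitor()
m.event("open"); m.event("read"); m.event("close")
try:
    m.event("read")              # forbidden: read on a closed object
except RuntimeError as e:
    print(e)
```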

    9.
    The UML Profile for Modeling and Analysis of Real-Time and Embedded (RTE) systems has recently been adopted by the OMG. Its Time Model extends the informal and simplistic Simple Time package proposed by the Unified Modeling Language (UML2) and offers a broad range of capabilities required to model RTE systems, including discrete/dense and chronometric/logical time. The MARTE specification introduces a Time Structure inspired by several time models from concurrency theory and proposes a new clock constraint specification language (CCSL) to specify, within the context of the UML, logical and chronometric time constraints. A semantic model in CCSL is attached to a (UML) model to give its timed causality semantics. In that sense, CCSL is comparable to the Ptolemy environment, in which directors give semantics to models according to predefined models of computation and communication. This paper focuses on one historical model of computation in Ptolemy, Synchronous Data Flow (SDF), and shows how to build SDF graphs by combining UML models and CCSL.
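    The SDF model underlying this construction is easy to demonstrate. The sketch below computes the repetition vector of a single SDF edge from the rate-balance equation; the actors and rates are invented for illustration:

```python
from math import gcd

def repetitions(prod, cons):
    """For an edge A --prod/cons--> B, balance requires
    reps[A] * prod == reps[B] * cons; return the smallest solution."""
    g = gcd(prod, cons)
    return cons // g, prod // g  # (repetitions of A, repetitions of B)

# A produces 3 tokens per firing, B consumes 2 per firing
ra, rb = repetitions(3, 2)
print(ra, rb)  # 2 3 -> the schedule A A B B B repeats with bounded buffers
```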

    10.
    We present the InterAspect instrumentation framework for GCC, a widely used compiler infrastructure. The addition of plug-in support in the latest release of GCC makes it an attractive platform for runtime instrumentation, as GCC plug-ins can directly add instrumentation by transforming the compiler's intermediate representation. Such transformations, however, require expert knowledge of GCC internals. InterAspect addresses this situation by allowing instrumentation plug-ins to be developed using the familiar vocabulary of Aspect-Oriented Programming: pointcuts, join points, and advice functions. Moreover, InterAspect uses specific information about each join point in a pointcut, possibly including results of static analysis, to support powerful customized instrumentation. We describe the InterAspect API and present several examples that illustrate its practical utility as a runtime-verification platform. We also introduce a tracecut system that uses InterAspect to construct program monitors that are formally specified as regular expressions.
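    The AOP vocabulary that InterAspect exposes can be mimicked in a few lines. Below is a toy pointcut/advice mechanism using Python decorators; InterAspect itself works on GCC's intermediate representation in C, so this is only an analogy, and all names here are made up:

```python
import functools

def advice_before(pointcut, advice):
    """Wrap every function matched by `pointcut` so that `advice`
    runs at the join point, before the original body."""
    def instrument(fn):
        if not pointcut(fn):
            return fn
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            advice(fn.__name__, args)
            return fn(*args, **kwargs)
        return wrapper
    return instrument

@advice_before(lambda f: f.__name__.startswith("alloc"),
               lambda name, args: print(f"join point: {name}{args}"))
def alloc_buffer(size):
    return bytearray(size)

alloc_buffer(16)  # the advice fires first, then the original body runs
```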

    11.
    In this paper we present Caesar, an intelligent domestic service robot. In domestic settings, service robots must accomplish complex tasks. Those tasks benefit from deliberation, from robust action execution, and from flexible methods for human-robot interaction that account for qualitative notions used in natural language as well as for human fallibility. Our robot Caesar deploys AI techniques on several levels of its system architecture. On the low-level side, system modules for localization or navigation make use of, for instance, path-planning methods, heuristic search, and Bayesian filters. For face recognition and human-machine interaction, random trees and well-known methods from natural language processing are deployed. For deliberation, we use the robot programming and plan language Readylog, which was developed for the high-level control of agents and robots; it allows combining programmed behaviour with planning to find a course of action. Readylog is a variant of the robot programming language Golog. We extended Readylog to be able to cope with qualitative notions of space frequently used by humans, such as “near” and “far”. This facilitates human-robot interaction by bridging the gap between human natural language and the numerical values needed by the robot. Further, we use Readylog to increase the flexibility of interpreting human commands with decision-theoretic planning. We give an overview of the different methods deployed in Caesar and show the applicability of a system equipped with these AI techniques in domestic service robotics.
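    The bridge between numeric and qualitative space can be sketched directly. A hypothetical mapping from metric distances to the qualitative categories a command like "drive near the table" needs; the thresholds are invented, and Caesar embeds this kind of mapping in Readylog rather than in plain code:

```python
def qualitative_distance(meters):
    """Bridge numeric sensor values and natural-language categories."""
    if meters < 0.5:
        return "at"
    if meters < 2.0:
        return "near"
    return "far"

for d in (0.3, 1.2, 7.5):
    print(d, "->", qualitative_distance(d))  # at, near, far
```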

    12.
    In this paper, we present ParaMiner, a generic and parallel algorithm for closed pattern mining. ParaMiner is built on the principles of pattern enumeration in strongly accessible set systems. Its efficiency is due to a novel dataset reduction technique (which we call EL-reduction), combined with novel techniques for performing dataset reduction in a parallel execution on a multi-core architecture. We illustrate ParaMiner's genericity by using the algorithm to solve three different pattern mining problems: frequent itemset mining, mining frequent connected relational graphs, and mining gradual itemsets. In this paper, we prove the soundness and completeness of ParaMiner. Furthermore, our experiments show that despite being a generic algorithm, ParaMiner can compete with specialized state-of-the-art algorithms designed for the pattern mining problems mentioned above. Moreover, for the particular problem of gradual itemset mining, ParaMiner outperforms the state-of-the-art algorithm by two orders of magnitude.
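    What "closed" means here is worth pinning down. A naive, sequential sketch of closed frequent itemset mining follows; it reproduces neither ParaMiner's EL-reduction nor its parallelism, only the definition the algorithm enumerates:

```python
from itertools import combinations

def closed_frequent_itemsets(db, min_sup):
    """Return closed frequent itemsets: frequent itemsets whose every
    proper superset has strictly smaller support."""
    items = sorted({i for t in db for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = sum(1 for t in db if set(cand) <= t)
            if s >= min_sup:
                support[frozenset(cand)] = s
    return {x: s for x, s in support.items()
            if not any(x < y and sy == s for y, sy in support.items())}

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}]
print(closed_frequent_itemsets(db, min_sup=2))
# {a}:3, {a,b}:2, {a,c}:2 -- {b} and {c} are absorbed by their closures
```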

    13.
    We propose an effective procedure, the first one to our knowledge, for translating a proof term of the Calculus of Inductive Constructions (CIC) into a tactical expression of the high-level specification language of a CIC-based proof assistant such as Coq (Coq development team 2008) or Matita (Asperti et al., J Autom Reason 39:109–139, 2007). This procedure, which should not be considered definitive at its present stage, is intended for translating the logical representation of a proof coming from any source, i.e. from a digital library or from another proof development system, into an equivalent proof presented in the proof assistant's editable high-level format. To testify to the effectiveness of our procedure, we report on its implementation in Matita and on the translation of a significant set of proofs (Guidi, ACM Trans Comput Log 2009) from their logical representation as Coq 7.3.1 (Coq development team 2002) CIC proof terms to their high-level representation as tactical expressions in Matita's user interface language.

    14.
    The Frequency Assignment Problem (FAP) is one of the key issues in the design of Global System for Mobile Communications (GSM) networks. The formulation of the FAP used here focuses on aspects that are relevant to real GSM networks. In this paper, we adapt a parallel model to tackle a multiobjectivised version of the FAP. It is a hybrid model that combines an island-based model and a hyperheuristic. The main aim of this paper is to design a strategy that facilitates the application of the current best-behaved algorithm. Specifically, our goal is to decrease the user effort required to set its parameters. At the same time, the usage of such an algorithm in parallel environments is enabled. As a result, the time required to attain high-quality solutions is decreased. We also conduct a robustness analysis of this parallel model, in which we study the relationship between the migration stage of the parallel model and the quality of the resulting solutions. In addition, we carry out a scalability study of the parallel model, analysing the impact that the migration stage has on the scalability of the entire parallel model. Computational results with several real network instances have validated our proposed approach. The best-known frequency plans for two real-world network instances are improved with this strategy.
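    The island-based layer of such a model can be sketched generically. Below is a toy island model with periodic ring migration of the best individual; the fitness function, topology, and rates are placeholders, not the paper's tuned FAP setup:

```python
import random

def island_model(fitness, n_islands=4, pop_size=10, gens=200,
                 migrate_every=25, genome_len=16, seed=1):
    rng = random.Random(seed)
    new = lambda: [rng.randint(0, 1) for _ in range(genome_len)]
    islands = [[new() for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(gens):
        for pop in islands:
            # crude evolution step: mutate one individual, keep if no worse
            i = rng.randrange(pop_size)
            child = [b ^ (rng.random() < 0.1) for b in pop[i]]
            if fitness(child) >= fitness(pop[i]):
                pop[i] = child
        if g % migrate_every == 0:
            # ring migration: best of island k replaces worst of island k+1
            for k, pop in enumerate(islands):
                best = max(pop, key=fitness)
                nxt = islands[(k + 1) % n_islands]
                nxt[min(range(pop_size), key=lambda j: fitness(nxt[j]))] = best[:]
    return max((ind for pop in islands for ind in pop), key=fitness)

print(sum(island_model(fitness=sum)))  # maximise the number of ones
```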

    15.
    The theory of hybrid systems is well established as a model for real-world systems consisting of continuous behaviour and discrete control. In practice, the behaviour of such systems is also subject to uncertainties, such as measurement errors, or is controlled by randomised algorithms. These aspects can be modelled and analysed using stochastic hybrid systems. In this paper, we present HModest, an extension of the Modest modelling language that adds differential equations and inclusions as an expressive way to describe continuous system evolution; Modest itself was originally designed for stochastic timed systems without complex continuous aspects. Modest is a high-level language inspired by classical process algebras, so compositional modelling is an integral feature. We define the syntax and semantics of HModest and show that it is a conservative extension of Modest that retains the compositional modelling approach. To allow the analysis of HModest models, we report on the implementation of a connection to recently developed tools for the safety verification of stochastic hybrid systems, and illustrate the language and the tool support with a set of small but instructive case studies.
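    The kind of model HModest describes can be approximated with a tiny simulation. Here is a sketch of a stochastic thermostat: continuous evolution by an ODE, plus a discrete mode switch at a noisy guard. The Euler integration, constants, and noise model are illustrative assumptions only:

```python
import random

def simulate(t_end=10.0, dt=0.01, seed=0):
    """Heating mode: dT/dt = 2; cooling mode: dT/dt = -(T - 15)/4.
    Modes switch at noisy thresholds to mimic measurement error."""
    rng = random.Random(seed)
    temp, heating, t = 18.0, True, 0.0
    while t < t_end:
        temp += (2.0 if heating else -(temp - 15.0) / 4.0) * dt
        noisy = temp + rng.gauss(0, 0.1)   # sensed value, not the true state
        if heating and noisy >= 22.0:
            heating = False                # guard: too hot -> start cooling
        elif not heating and noisy <= 19.0:
            heating = True                 # guard: too cold -> start heating
        t += dt
    return temp

print(round(simulate(), 2))
```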

    16.
    17.
    Despite a large body of work on XPath query processing in relational environments, the systematic study of queries containing not-predicates has received little attention in the literature. In particular, several XML supports of industrial-strength commercial RDBMSs fail to evaluate such queries efficiently. In this paper, we present an efficient and novel strategy for evaluating not-twig queries in a tree-unaware relational environment. Not-twig queries are XPath queries with ancestor–descendant and parent–child axes that contain one or more not-predicates. We propose a novel Dewey-based encoding scheme called Andes (ANcestor Dewey-based Encoding Scheme), which enables us to efficiently filter out elements satisfying a not-predicate by comparing their ancestor group identifiers. In this approach, a set of elements under the same common ancestor at a specific level in the XML tree is assigned the same ancestor group identifier. Based on this scheme, we propose a novel SQL translation algorithm for not-twig query evaluation. Our experiments confirm that the proposed approach, built on top of an off-the-shelf commercial RDBMS, significantly outperforms state-of-the-art relational and native approaches. We also explore the query plans selected by a commercial relational optimizer to evaluate our translated queries for different input cardinalities. This exploration further validates the performance benefits of Andes.
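    The core of the encoding is easy to demonstrate. A sketch of Dewey labels and level-k ancestor group identifiers used to filter out elements whose ancestor matches a not-predicate; this is a simplification, and Andes' actual SQL translation is considerably richer:

```python
def ancestor_group(dewey, level):
    """All elements below the same level-`level` ancestor share this id."""
    return tuple(dewey[:level])

# Dewey ids of <b> elements, and of <c> elements that must NOT be present
b_nodes = [(1, 1, 2), (1, 2, 5), (1, 3, 1)]
c_nodes = [(1, 2, 7)]

# evaluate //a[not(c)]/b with the 'a' ancestor at level 2:
bad = {ancestor_group(c, 2) for c in c_nodes}
result = [b for b in b_nodes if ancestor_group(b, 2) not in bad]
print(result)  # [(1, 1, 2), (1, 3, 1)] -- the subtree under (1, 2) is filtered out
```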

    18.
    Service Oriented Computing is a paradigm for developing software systems as the composition of a number of services. Services are loosely coupled entities that can be dynamically published, discovered, and invoked over a network. The engineering of such systems presents novel challenges, mostly due to the dynamicity and distributed nature of service-based applications. In this paper, we focus on the modelling of service orchestrations. We discuss the relationship between two languages developed under the Sensoria project: SRML, a high-level modelling language for Service Oriented Architectures, and StPowla, a process-oriented orchestration approach that separates core business processes from system variability at the end-user's level, where the focus is on achieving business goals. A fundamental challenge of software engineering is to correctly align business goals with IT strategy, and to this end we present an encoding of StPowla into SRML. This provides a formal framework for StPowla and also a separated view of the policies representing system variability that is not present in SRML.

    19.
    Scientific applications are becoming increasingly complex, e.g., to improve their accuracy by taking more phenomena into account. Meanwhile, computing infrastructures continue their fast evolution. Software engineering is thus becoming a major issue: it must offer ease of development, portability, and maintainability while achieving high performance. Component-based software engineering offers a promising approach that enables the manipulation of the software architecture of applications. However, existing models do not provide adequate support for performance portability of HPC applications. This paper proposes a low-level component model (L2C) that supports inter-component interactions for typical scenarios of high-performance computing, such as process-local shared memory and function invocation (C++ and Fortran), MPI, and CORBA. To study the benefits of using L2C, this paper walks through an example of stencil computation, i.e., a structured-mesh Jacobi implementation of the 2D heat equation parallelized through domain decomposition. The experimental results obtained on the Grid'5000 testbed and on the Curie supercomputer show that L2C can achieve performance similar to that of native implementations, while easing performance portability.
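    The stencil computation used as the running example is a standard kernel. Below is a minimal sequential Jacobi sweep for the 2D heat equation; it omits the domain decomposition and the component assembly that L2C provides from C++/MPI pieces, showing only the numerics:

```python
def jacobi_step(u):
    """One Jacobi iteration: each interior cell becomes the average
    of its four neighbours; boundary cells stay fixed."""
    n, m = len(u), len(u[0])
    return [[u[i][j] if i in (0, n - 1) or j in (0, m - 1)
             else 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
             for j in range(m)] for i in range(n)]

# 5x5 plate, hot top edge, cold elsewhere
u = [[100.0] * 5] + [[0.0] * 5 for _ in range(4)]
for _ in range(100):
    u = jacobi_step(u)
print([round(x, 1) for x in u[2]])  # the mid-row converges towards steady state
```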

    20.
    Reasoning about the termination of equational programs in sophisticated equational languages such as Elan, Maude, OBJ, CafeOBJ, Haskell, and so on requires support for advanced features such as evaluation strategies, rewriting modulo, use of extra variables in conditions, partiality, and expressive type systems (possibly including polymorphism and higher-order features). However, many of those features are, at best, only partially supported by current term rewriting termination tools (for instance mu-term, CiME, AProVE, TTT, Termptation, etc.), while they may be essential to ensure termination. We present a sequence of theory transformations that can be used to bridge the gap between expressive membership equational programs and such termination tools, and we prove the correctness of these transformations. We also discuss a prototype tool that performs the transformations on Maude equational programs and sends the resulting transformed theories to some of the aforementioned standard termination tools.
