Similar Literature
20 similar records retrieved (search time: 803 ms)
1.
The UML Profile for Modeling and Analysis of Real-Time and Embedded (RTE) systems (MARTE) has recently been adopted by the OMG. Its Time Model extends the informal and simplistic SimpleTime package of UML 2 and offers a broad range of capabilities required to model RTE systems, including discrete/dense and chronometric/logical time. The MARTE specification introduces a Time Structure inspired by several time models from concurrency theory and proposes a new Clock Constraint Specification Language (CCSL) to specify, within the context of UML, logical and chronometric time constraints. A semantic model in CCSL is attached to a UML model to give it a timed causality semantics. In that sense, CCSL is comparable to the Ptolemy environment, in which directors give semantics to models according to predefined models of computation and communication. This paper focuses on one historical model of computation of Ptolemy, Synchronous Data Flow (SDF), and shows how to build SDF graphs by combining UML models and CCSL.
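As a small illustration of the SDF side only (not of MARTE or CCSL), the sketch below solves the SDF balance equations to obtain a graph's repetitions vector; the actor names and token rates are invented for the example.

```python
from fractions import Fraction
from math import lcm
from collections import defaultdict, deque

def repetitions_vector(edges):
    """Solve the SDF balance equations r[a]*prod = r[b]*cons for the
    smallest positive integer repetitions vector.  Assumes one connected,
    consistent graph.  `edges` is a list of (src, dst, prod, cons)."""
    adj = defaultdict(list)
    for a, b, p, c in edges:
        adj[a].append((b, Fraction(p, c)))   # r[b] = r[a] * p / c
        adj[b].append((a, Fraction(c, p)))
    start = edges[0][0]
    rate = {start: Fraction(1)}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v, factor in adj[u]:
            if v not in rate:
                rate[v] = rate[u] * factor
                queue.append(v)
    scale = lcm(*(r.denominator for r in rate.values()))
    return {actor: int(r * scale) for actor, r in rate.items()}

# Example: A --(2,3)--> B --(1,2)--> C  gives r = {A: 3, B: 2, C: 1}
print(repetitions_vector([("A", "B", 2, 3), ("B", "C", 1, 2)]))
```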

2.
The AtMostSeqCard constraint is the conjunction of a cardinality constraint on a sequence of n variables and of n − q + 1 AtMost u constraints on each subsequence of size q. This constraint is useful in car-sequencing and crew-rostering problems. In van Hoeve et al. (Constraints 14(2):273–292, 2009), two algorithms designed for the AmongSeq constraint were adapted to this constraint with an O(2^q n) and O(n^3) worst-case time complexity, respectively. In Maher et al. (2008), another algorithm similarly adaptable to filter the AtMostSeqCard constraint with a time complexity of O(n^2) was proposed. In this paper, we introduce an algorithm for achieving arc consistency on the AtMostSeqCard constraint with an O(n) (hence optimal) worst-case time complexity. Next, we show that this algorithm can be easily modified to achieve arc consistency on some extensions of this constraint. In particular, the conjunction of a set of m AtMostSeqCard constraints sharing the same scope can be filtered in O(nm). We then empirically study the efficiency of our propagator on instances of the car-sequencing and crew-rostering problems.
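The paper's linear-time filtering algorithm is not reproduced here; purely as a reading aid for the definition above, a brute-force check of the AtMostSeqCard condition on a complete 0/1 assignment (parameter names are hypothetical) could look like this:

```python
def at_most_seq_card(x, q, u, d):
    """Check the AtMostSeqCard definition on a full 0/1 assignment x:
    the whole sequence has cardinality d, and every window of q
    consecutive variables contains at most u ones."""
    n = len(x)
    if sum(x) != d:
        return False
    return all(sum(x[i:i + q]) <= u for i in range(n - q + 1))

# Example: q=3, u=1, d=2 over 6 variables.
print(at_most_seq_card([1, 0, 0, 1, 0, 0], q=3, u=1, d=2))  # True
print(at_most_seq_card([1, 1, 0, 0, 0, 0], q=3, u=1, d=2))  # False: window [1, 1, 0] has 2 ones
```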

3.
This work presents a particle filter method closely related to FastSLAM for solving the simultaneous localization and mapping (SLAM) problem. Using the standard FastSLAM algorithm, only low-dimensional vehicle models can be handled due to computational constraints. In this work, an extra factorization of the problem is introduced that makes high-dimensional vehicle models computationally feasible. Results using experimental data from an unmanned aerial vehicle (helicopter) are presented. The proposed algorithm fuses measurements from on-board inertial sensors (accelerometer and gyro), barometer, and vision in order to solve the SLAM problem.
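The sketch below is only a generic bootstrap particle filter for a one-dimensional toy localization problem, recalling the predict/weight/resample loop that FastSLAM-style methods build on; it does not reproduce the paper's extra factorization or the map estimation, and all models and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, control, measurement,
                         motion_std=0.5, meas_std=1.0):
    """One predict/weight/resample cycle of a bootstrap particle filter
    for a 1-D position state with a direct (noisy) position measurement."""
    # Predict: propagate each particle through a noisy motion model.
    particles = particles + control + rng.normal(0.0, motion_std, size=particles.shape)
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample: draw particles proportionally to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

particles = rng.normal(0.0, 2.0, size=500)   # initial belief about the position
for control, measurement in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    particles = particle_filter_step(particles, control, measurement)
print("estimated position:", particles.mean())
```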

4.
S-Net is a declarative coordination language and component technology aimed at radically facilitating software engineering for modern parallel compute systems by near-complete separation of concerns between application (component) engineering and concurrency orchestration. S-Net builds on the concept of stream processing to structure networks of communicating asynchronous components implemented in a conventional (sequential) language. In this paper we present the design, implementation and evaluation of a new and innovative runtime system for S-Net streaming networks. The Front runtime system outperforms the existing implementations of S-Net by orders of magnitude for stress-test benchmarks, significantly reduces runtimes of fully-fledged parallel applications with compute-intensive components and achieves good scalability on our 48-core test system.

5.
The adoption of Artificial Neural Networks (ANNs) in safety-related applications is often avoided because it is difficult to rule out possible misbehaviors with traditional analytical or probabilistic techniques. In this paper we present NeVer, our tool for checking safety of ANNs. NeVer encodes the problem of verifying safety of ANNs into the problem of satisfying corresponding Boolean combinations of linear arithmetic constraints. We describe the main verification algorithm and the structure of NeVer. We also present empirical results confirming the effectiveness of NeVer on realistic case studies.
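NeVer's actual encoding and abstraction scheme are not given in the abstract; as a generic illustration of expressing network behaviour as a Boolean combination of linear arithmetic constraints, the hypothetical sketch below encodes a single ReLU unit with the z3 Python bindings (assuming the z3-solver package) and checks a toy safety property by asserting its negation.

```python
from z3 import Real, Solver, And, Or, unsat

x, y = Real("x"), Real("y")
s = Solver()

# Input domain: 0 <= x <= 1.
s.add(And(x >= 0, x <= 1))
# One ReLU unit y = max(0, 2*x - 1), written as a Boolean
# combination of linear constraints (case split on the sign).
pre = 2 * x - 1
s.add(Or(And(pre >= 0, y == pre), And(pre < 0, y == 0)))
# Safety property to verify: y <= 1.  Assert its negation and
# look for a counterexample.
s.add(y > 1)

if s.check() == unsat:
    print("property holds: y <= 1 on the whole input domain")
else:
    print("counterexample:", s.model())
```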

6.
Golomb rulers are rulers in which the distance between any two marks is unique. They find applications in radio frequency selection, radio astronomy, data encryption, communication networks, and bioinformatics. An important subproblem in constructing “compact” Golomb rulers is Golomb Subruler (GSR), which asks whether it is possible to make a given ruler Golomb by removing at most k marks. We initiate a study of GSR from a parameterized complexity perspective. In particular, we consider a natural hypergraph characterization of rulers and investigate the construction and structure of the corresponding hypergraphs. We exploit their properties to derive polynomial-time data reduction rules that reduce a given instance of GSR to an equivalent one with O(k^3) marks. Finally, we complement a recent computational complexity study of GSR by providing a simplified reduction that shows NP-hardness even when all integers are bounded by a polynomial in the input length.
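As a definition-level reading aid only (not the paper's data reduction rules), a brute-force Golomb Subruler check that tries every way of removing at most k marks:

```python
from itertools import combinations

def is_golomb(marks):
    """A ruler is Golomb iff all pairwise distances between marks are distinct."""
    dists = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(dists) == len(set(dists))

def golomb_subruler(marks, k):
    """Golomb Subruler by brute force: can removing at most k marks make
    the ruler Golomb?  Exponential; only for tiny instances."""
    marks = list(marks)
    for r in range(k + 1):
        for removed in combinations(range(len(marks)), r):
            kept = [m for i, m in enumerate(marks) if i not in removed]
            if is_golomb(kept):
                return kept
    return None

# {0, 1, 2, 4} repeats distances 1 and 2; removing one mark fixes it.
print(golomb_subruler([0, 1, 2, 4], k=1))   # prints [1, 2, 4]
```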

7.
Due to significant advances in SAT technology in recent years, its use for solving constraint satisfaction problems (CSPs) has been gaining wide acceptance. Solvers for satisfiability modulo theories (SMT) generalize SAT solving by adding the ability to handle arithmetic and other theories. Although there are results pointing out the adequacy of SMT solvers for solving CSPs, there are no available tools to extensively explore such adequacy. For this reason, in this paper we introduce a tool for translating FlatZinc (MiniZinc intermediate code) instances of CSPs into the standard SMT-LIB language. We provide extensive performance comparisons between state-of-the-art SMT solvers and most of the available FlatZinc solvers on standard FlatZinc problems. The obtained results suggest that state-of-the-art SMT solvers can be effectively used to solve CSPs.
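The FlatZinc-to-SMT-LIB translator itself is not shown in the abstract; as a minimal illustration of the underlying idea of handing a finite-domain CSP to an SMT solver, a hypothetical sketch with the z3 Python bindings (the CSP is invented; Solver.sexpr() prints the assertions in SMT-LIB syntax):

```python
from z3 import Ints, Solver, Distinct, sat

# A toy finite-domain CSP of the kind a FlatZinc instance might describe:
# three integers in 1..5, all different, plus a linear constraint.
x, y, z = Ints("x y z")
s = Solver()
s.add(x >= 1, x <= 5, y >= 1, y <= 5, z >= 1, z <= 5)
s.add(Distinct(x, y, z))
s.add(x + 2 * y == z + 6)

if s.check() == sat:
    print(s.model())      # one satisfying assignment
print(s.sexpr())          # the same assertions in SMT-LIB syntax
```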

8.
In this paper, we study the relation among Answer Set Programming (ASP) systems from a computational point of view. We consider the Smodels, DLV, and Cmodels ASP systems based on stable model semantics, the first two being native ASP systems and the last being a SAT-based system. We first show that Smodels, DLV, and Cmodels explore search trees with the same branching nodes (assuming, of course, the same branching heuristic) on the class of tight logic programs. Leveraging the fact that SAT-based systems rely on the deeply studied Davis–Logemann–Loveland (DLL) algorithm, we derive new complexity results for the ASP procedures. We also show that on non-tight programs the SAT-based systems are computationally different from the native procedures, and that the latter have computational advantages. Moreover, we show that native procedures can guarantee the “correctness” of a reported solution when reaching the leaves of the search trees (i.e., no stability check is needed), while this is not the case for SAT-based procedures on non-tight programs. A similar advantage holds for DLV in comparison with Smodels if the “well-founded” operator is disabled and only Fitting's operator is used for negative inferences. We finally study the “cost” of achieving such advantages and comment on the extent to which the results presented extend to other systems.

9.
Self-organization of autonomous mobile nodes using bio-inspired algorithms in mobile ad hoc networks (MANETs) has been presented in earlier work of the authors. In this paper, the convergence speed of our force-based genetic algorithm (called FGA) is analyzed using homogeneous Markov chains. The FGA is run by each mobile node as a topology control mechanism to decide the corresponding node's next speed and movement direction, guiding an autonomous mobile node over an unknown geographical area to obtain a uniform node distribution while using only local information. The stochastic behavior of the FGA, like that of all GA-based approaches, makes it difficult to analyze the effects that various MANET characteristics have on its convergence speed. Metrically transitive homogeneous Markov chains are used to analyze the convergence of our FGA with respect to various communication ranges of mobile nodes and the number of nodes in various scenarios. The Dobrushin contraction coefficient of ergodicity is used to measure the convergence speed of the Markov chain model of our FGA. Two different testbed platforms are presented to illustrate the effectiveness of our bio-inspired algorithm in terms of area coverage.
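As a hedged sketch of the convergence measure mentioned above (the textbook definition, not the paper's specific chain), the Dobrushin contraction coefficient of a row-stochastic matrix can be computed directly:

```python
import numpy as np

def dobrushin_coefficient(P):
    """Dobrushin's coefficient of ergodicity for a row-stochastic matrix P:
    delta(P) = 1/2 * max_{i,j} sum_k |P[i,k] - P[j,k]|.
    If delta(P) < 1, the total-variation distance between any two start
    distributions shrinks by at least this factor per step, giving
    geometric convergence."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    return 0.5 * max(np.abs(P[i] - P[j]).sum() for i in range(n) for j in range(n))

# Toy 3-state chain (rows sum to 1); values are invented for the example.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
print(dobrushin_coefficient(P))   # 0.5 < 1, so the chain mixes geometrically
```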

10.
A DIN Kernel LISP Draft (DKLisp) has been developed by DIN as a reaction to Action D1 (N79), a short-term goal of ISO WG16. It defines a subset language, as compatible as possible with the ANSI Common Lisp draft, but also with the EuLisp draft. It combines the most important LISP mainstream features in a single, compact, but nevertheless complete language definition, which could thereby be well suited as the basis for a short-term international Lisp standard. Besides the functional and knowledge-processing features, the expressive power of the language is well comparable with contemporary procedural languages such as C++ (of course without libraries). Important features of DKLisp are:
  • to be a “Lisp-1,” but allowing an easy “Lisp-2” transformation;
  • to be a simple, powerful and standardized educational Lisp;
  • to omit all features which are unclean or still under heavy discussion;
  • DKLisp programs run nearly unchanged in Common Lisp;
  • DKLisp contains a simple object and package system;
  • DKLisp contains those data classes and control structures also common to most modern Lisp and non-Lisp languages;
  • DKLisp offers simple stream I/O;
  • DKLisp contains a clean, unified hierarchical class/type system;
  • DKLisp contains the typical “Lisp features” in an orthogonal way;
  • DKLisp allows and encourages really small but powerful implementations;
  • DKLisp comes in levels, allowing ANSI Common Lisp to be an extension of DKLisp level-1.
This is the second version of the proposal, namely version 1.2, with slight differences with respect to the one sent to ISO. The changes stem from the remarks generously sent by many readers of the previous version.

11.
Overlay networks support a wide range of peer-to-peer media streaming applications on the Internet. The user experience of such applications is affected by the churn resilience of the system. When peers disconnect from the system, streamed data may be delayed or lost due to missing links in the overlay topology. In this paper, we explore a proactive strategy to create churn-aware overlay networks that reduce the potential for disruptions caused by churn events. We describe Chams, a middleware for constructing overlay networks that mitigates the impact of churn. Chams uses a “hybrid” approach: it implicitly defines an overlay topology using a gossip-style mechanism, while taking the reliability of peers into account. Unlike other systems for overlay construction, Chams supports a variety of topologies used in media streaming systems, such as trees, multi-trees and forests. We evaluate Chams with different topologies and show that it reduces the impact of churn, while imposing only low computational and message overheads.

12.
Graphs appear in numerous applications including cyber security, the Internet, social networks, protein networks, recommendation systems, citation networks, and many more. Graphs with millions or even billions of nodes and edges are commonplace. How to store such large graphs efficiently? What are the core operations/queries on those graphs? How to answer the graph queries quickly? We propose GBase, an efficient analysis platform for large graphs. The key novelties lie in (1) our storage and compression scheme for parallel, distributed settings and (2) the carefully chosen graph operations and their efficient implementations. We designed and implemented an instance of GBase using MapReduce/Hadoop. GBase provides a parallel indexing mechanism for graph operations that both saves storage space and accelerates query responses. We run numerous experiments on real and synthetic graphs spanning billions of nodes and edges, and we show that our proposed GBase is indeed fast, scalable, and nimble, with significant savings in space and time.
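GBase's actual storage layout is not described in the abstract; as a purely hypothetical illustration of block-oriented storage for sparse graphs (grouping the adjacency matrix into k-by-k blocks and keeping only the non-empty ones), a toy sketch:

```python
from collections import defaultdict

def block_store(edges, k):
    """Group the edges of a graph into k-by-k blocks of the adjacency
    matrix and keep only the non-empty blocks -- a toy stand-in for
    block-based storage of large sparse graphs, not GBase's actual format."""
    blocks = defaultdict(list)
    for u, v in edges:
        blocks[(u // k, v // k)].append((u % k, v % k))
    return dict(blocks)

edges = [(0, 1), (1, 0), (2, 3), (7, 6), (6, 7)]
for block_id, local_edges in block_store(edges, k=4).items():
    print(block_id, local_edges)
# A query touching node 6 only needs blocks whose row index is 6 // 4 == 1.
```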

13.
Reasoning about the termination of equational programs in sophisticated equational languages such as Elan, Maude, OBJ, CafeOBJ, Haskell, and so on, requires support for advanced features such as evaluation strategies, rewriting modulo, use of extra variables in conditions, partiality, and expressive type systems (possibly including polymorphism and higher-order types). However, many of those features are, at best, only partially supported by current term rewriting termination tools (for instance mu-term, CiME, AProVE, TTT, Termptation, etc.), while they may be essential to ensure termination. We present a sequence of theory transformations that can be used to bridge the gap between expressive membership equational programs and such termination tools, and we prove the correctness of these transformations. We also discuss a prototype tool that performs the transformations on Maude equational programs and sends the resulting transformed theories to some of the aforementioned standard termination tools.

14.
In a previous paper, we laid out the vision of a novel graph query processing paradigm where, instead of processing a visual query graph after its construction, visual query formulation and processing are interleaved by exploiting the latency offered by the GUI to filter irrelevant matches and prefetch partial query results [8]. Our recent attempts at implementing this vision [8, 9] show significant improvement in system response time (SRT) for subgraph queries. However, these efforts are designed specifically for graph databases containing a large collection of small or medium-sized graphs. In this paper, we propose a novel algorithm called QUBLE (QUery Blender for Large nEtworks) to realize this visual subgraph querying paradigm on very large networks (e.g., protein interaction networks, social networks). First, it decomposes a large network into a set of graphlets and supergraphlets using a minimum cut-based graph partitioning technique. Next, it mines approximate frequent and small infrequent fragments (SIFs) from them and identifies their occurrences in these graphlets and supergraphlets. Then, the indexing framework of [9] is enhanced so that the mined fragments can be exploited to index graphlets for efficient blending of visual subgraph query formulation and query processing. Extensive experiments on large networks demonstrate the effectiveness of QUBLE.

15.
A circle graph is the intersection graph of a set of chords in a circle. Keil [Discrete Appl. Math., 42(1):51–63, 1993] proved that Dominating Set, Connected Dominating Set, and Total Dominating Set are NP-complete in circle graphs. To the best of our knowledge, nothing was known about the parameterized complexity of these problems in circle graphs. In this paper we prove the following results, which contribute in this direction:
    • Dominating Set, Independent Dominating Set, Connected Dominating Set, Total Dominating Set, and Acyclic Dominating Set are W[1]-hard in circle graphs, parameterized by the size of the solution.
    • Whereas both Connected Dominating Set and Acyclic Dominating Set are W[1]-hard in circle graphs, it turns out that Connected Acyclic Dominating Set is polynomial-time solvable in circle graphs.
    • If T is a given tree, deciding whether a circle graph G has a dominating set inducing a graph isomorphic to T is NP-complete when T is in the input, and FPT when parameterized by t = |V(T)|. We prove that the FPT algorithm runs in subexponential time, namely $2^{\mathcal{O}(t \cdot \frac{\log\log t}{\log t})} \cdot n^{\mathcal{O}(1)}$, where n = |V(G)|.
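As a small reading aid for the opening definition only (not for the hardness or FPT results), the sketch below builds the intersection graph of a set of chords given by the positions of their endpoints on the circle:

```python
from itertools import combinations

def chords_cross(c1, c2):
    """Two chords of a circle, each given by the positions of its two
    endpoints around the circle, cross iff exactly one endpoint of the
    second chord lies between the endpoints of the first."""
    a, b = sorted(c1)
    return (a < c2[0] < b) != (a < c2[1] < b)

def circle_graph(chords):
    """Intersection graph of the chords: vertices are chord indices,
    edges join pairs of crossing chords."""
    return [(i, j) for (i, c1), (j, c2) in combinations(enumerate(chords), 2)
            if chords_cross(c1, c2)]

# Endpoints are positions 0..5 around the circle.
chords = [(0, 3), (1, 4), (2, 5)]
print(circle_graph(chords))   # [(0, 1), (0, 2), (1, 2)] -- a triangle
```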

16.
17.
The Frequency Assignment Problem (FAP) is one of the key issues in the design of Global System for Mobile Communications (GSM) networks. The formulation of the FAP used here focuses on aspects that are relevant to real GSM networks. In this paper, we adapt a parallel model to tackle a multiobjectivised version of the FAP. It is a hybrid model which combines an island-based model and a hyperheuristic. The main aim of this paper is to design a strategy that facilitates the application of the current best-behaved algorithm; specifically, our goal is to decrease the user effort required to set its parameters. At the same time, the use of this algorithm in parallel environments is enabled, and as a result the time required to attain high-quality solutions is decreased. We also conduct a robustness analysis of this parallel model, in which we study the relationship between the migration stage of the parallel model and the quality of the resulting solutions. In addition, we carry out a scalability study of the parallel model, analysing the impact that the migration stage has on the scalability of the entire parallel model. Computational results with several real network instances validate our proposed approach. The best-known frequency plans for two real-world network instances are improved with this strategy.

18.
In this paper, we present ParaMiner, a generic and parallel algorithm for closed pattern mining. ParaMiner is built on the principles of pattern enumeration in strongly accessible set systems. Its efficiency is due to a novel dataset reduction technique (which we call EL-reduction), combined with a novel technique for performing dataset reduction in a parallel execution on a multi-core architecture. We illustrate ParaMiner's genericity by using this algorithm to solve three different pattern mining problems: the frequent itemset mining problem, the frequent connected relational graph mining problem, and the gradual itemset mining problem. In this paper, we prove the soundness and the completeness of ParaMiner. Furthermore, our experiments show that despite being a generic algorithm, ParaMiner can compete with specialized state-of-the-art algorithms designed for the pattern mining problems mentioned above. Moreover, for the particular problem of gradual itemset mining, ParaMiner outperforms the state-of-the-art algorithm by two orders of magnitude.
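ParaMiner's enumeration scheme and EL-reduction are not reproduced here; as a definition-level reminder of what a closed frequent itemset is, a brute-force miner over a toy transaction database:

```python
from itertools import combinations

def closed_frequent_itemsets(transactions, min_support):
    """Brute-force closed frequent itemset mining: an itemset is closed
    iff no proper superset has the same support.  Exponential; only a
    definition-level illustration, not ParaMiner's enumeration."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            s = sum(1 for t in transactions if set(itemset) <= t)
            if s >= min_support:
                support[frozenset(itemset)] = s
    # Keep only itemsets with no frequent proper superset of equal support.
    return {X: s for X, s in support.items()
            if not any(X < Y and s == sY for Y, sY in support.items())}

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
for itemset, s in closed_frequent_itemsets(db, min_support=2).items():
    print(sorted(itemset), s)
```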

19.
Given a set of input points, the Steiner Tree Problem (STP) is to find a minimum-length tree that connects the input points, where it is possible to add new points to minimize the length of the tree. Solving the STP is of great importance since it is one of the fundamental problems in network design, very large scale integration routing, multicast routing, wire length estimation, computational biology, and many other areas. However, the STP is NP-hard, which shatters any hopes of finding a polynomial-time algorithm to solve the problem exactly. This is why the majority of research has looked at finding efficient heuristic algorithms. Additionally, many authors focused their work on utilizing the ever-increasing computational power and developed many parallel and distributed methods for solving the problem. In this way we are able to obtain better results in less time than ever before. Here, we present a survey of the parallel and distributed methods for solving the STP and discuss some of their applications.
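As one concrete example of the kind of heuristic such surveys cover (not an algorithm attributed to this particular survey), a classical shortest-path-based 2-approximation is available in networkx, assuming that package is installed; the toy graph below is invented:

```python
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Toy weighted graph; x is a potential Steiner point the heuristic may use.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "x", 1), ("b", "x", 1), ("c", "x", 1),
    ("a", "b", 3), ("b", "c", 3), ("a", "c", 3),
])
terminals = ["a", "b", "c"]

# The approximation is guaranteed within a factor 2 of the optimal Steiner tree.
T = steiner_tree(G, terminals, weight="weight")
print(sorted(T.edges(data="weight")))   # the tree uses the Steiner point x, total weight 3
```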

20.
A planning and scheduling (P&S) system takes as input a domain model and a goal, and produces a plan of actions to be executed, which will achieve the goal. A P&S system typically also offers plan execution and monitoring engines. Due to the non-deterministic nature of planning problems, it is a challenge to construct correct and reliable P&S systems, including, for example, declarative domain models. Verification and validation (V&V) techniques have been applied to address these issues. Furthermore, V&V systems have been applied to actually perform planning, and conversely, P&S systems have been applied to perform V&V of more traditional software. This article overviews some of the literature on the fruitful interaction between V&V and P&S.
