Similar documents
20 similar documents found (search time: 31 ms)
1.
In this paper we propose a technique to automate the process of building translators between operations languages, a family of DSLs used to program satellite operations procedures. We exploit the similarities between those languages to semi-automatically build a transformation schema between them, through the use of annotated grammars. To further simplify the overall translation process and reduce its complexity, we also propose an intermediate representation common to all operations languages. We validate our approach by semi-automatically deriving translators between some operations languages, using a prototype tool which we implemented for that purpose.

2.
We propose an automatic method for deriving linear size relations, which specify, with respect to some given norm, linear relationships between the sizes of the arguments of atoms in the least Herbrand model of a definite Horn clause program. The method is presented as an application of abstract interpretation. Its abstract domain consists of affine subspaces or linear varieties, and operations on elements of the domain are expressed in terms of operations from linear algebra. The main application of the technique lies in automatic termination analysis. Others are complexity and granularity analysis and the specialisation of constraints in constraint logic languages.
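To make the notion concrete, a standard illustrative instance (our own example, not taken from the paper) is the append/3 predicate: under the list-length norm, every atom append(Xs, Ys, Zs) in the least Herbrand model satisfies the linear size relation below.

```latex
% Size relation for append(Xs, Ys, Zs) under the list-length norm |.|
% (standard textbook example, shown here only for illustration)
\[
  \lvert Xs \rvert + \lvert Ys \rvert \;=\; \lvert Zs \rvert
\]
```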

3.
Automatically generating program translators from source and target language specifications is a non-trivial problem. In this paper we focus on the problem of automating the process of building translators between operations languages, a family of DSLs used to program satellite operations procedures. We exploit their similarities to semi-automatically build transformation tools between these DSLs. The input to our method is a collection of annotated context-free grammars. To simplify the overall translation process even further, we also propose an intermediate representation common to all operations languages. Finally, we discuss how to enrich our annotated-grammar model with more advanced semantic annotations to provide a verification system for the translation process. We validate our approach by semi-automatically deriving translators between some real-world operations languages, using the prototype tool which we implemented for that purpose.

4.
Reducibility concepts are fundamental in complexity theory. Usually, they are defined as follows: a problem Π is reducible to a problem Σ if Π can be computed using a program or device for Σ as a subroutine. However, this approach has its limitations if restricted computational models are considered. In the case of ordered binary decision diagrams (OBDDs), it allows the use of merely the almost unmodified original program for the subroutine. Here we propose a new reducibility for OBDDs: we say that Π is reducible to Σ if an OBDD for Π can be constructed by applying a sequence of elementary operations to an OBDD for Σ. In contrast to traditional reducibility notions, the newly introduced reduction reflects the real needs of a reducibility concept in the context of OBDD-based complexity classes: it allows the reduction of problems to others which are computable with the same amount of OBDD resources, and it gives a tool to carry over lower and upper bounds. The authors are grateful to DAAD Acciones Integradas, Grant No. 322-ai-e-dr.

5.
6.
A static analysis method for verifying timing properties of real-time distributed programs is presented. The goal is to calculate the worst-case response time of concurrent tasks which run mainly independently but share, and may have to wait for, logical or physical devices. For such tasks, the determination of the worst-case waiting time is a crucial problem because of the unpredictable order of synchronization events. We investigate the class of distributed client-server programs in which independent, time-critical tasks (clients) are synchronized only through additional server tasks, playing the role of monitors or resource managers. This model follows well-known real-time design guidelines for distributed Ada programs proposed to enhance schedulability and synchronization analysis. Our formal analysis approach is flow-graph oriented. It leads to generating reduced program paths, each of which represents a sequence of ordered local and global operations, thus transforming and reducing the original problem of computing the worst-case waiting time of a concurrent task into a graph-theoretic problem of calculating the maximal blocking time for one of its corresponding program paths. While local operations are completely independent, global operations require mutually exclusive access to shared resources. We prove that computing the worst-case blocking time for a program path is NP-complete. Even for a reduced problem solution—which would yield a good upper bound for the worst-case blocking time—a conjecture maintained over many years held that this problem was NP-complete. A major result of this paper is to show that this is wrong. Instead, we construct a polynomial solution algorithm, and we prove its correctness. The effectiveness and complexity of our method are discussed, with particular emphasis on distributed real-time debugging.

7.
We propose a measure of program complexity which takes into account both the relationships between statements and the relationships between statements and data objects (constants and variables). This measure, called program flow complexity, can easily be calculated from the source text of a program.

8.
Obtaining an optimal schedule for a set of precedence-constrained tasks is a well-known NP-complete problem in its general form. In view of the intractability of the problem, most of the previous work relies on heuristics that try to find reasonably high-quality solutions in an acceptable amount of time. While optimal polynomial-time algorithms are known only for a few simple cases (and in other cases optimal solutions can only be obtained through an exhaustive search with prohibitively high time complexity), optimal solutions may be critically important for applications in which performance is the prime objective. Optimal solutions can also serve as a reference to test the performance of various heuristics. Moreover, an optimal schedule for a program at hand needs to be determined only once (and off-line), but the program using that schedule is in general executed several times. In this paper, we propose optimal algorithms for static scheduling of task graphs with arbitrary parameters to multiple homogeneous processors. The first algorithm is based on the A* search technique and uses a computationally efficient cost function for guiding the search with reduced complexity. Additionally, we propose a number of effective state-pruning techniques to reduce the search space. To lower the complexity further, we propose an efficient parallelization of the search algorithm. We parallelize the algorithm with reduced interprocessor communication as well as with static and dynamic load-balancing schemes to evenly distribute the search states to the processors. We also propose an approximate algorithm that guarantees a bounded deviation from the optimal solution but executes in a considerably shorter time. Based on an extensive experimental evaluation of the algorithms, we conclude that the parallel algorithm with pruning techniques is an efficient scheme for generating optimal solutions of reasonably large problems, while the approximate algorithm is effective if slightly degraded solutions are acceptable.

9.
We propose an algorithm for solving region-to-region visibility problems on digital terrain models using data-parallel machines. Since global communication is the bottleneck in this kind of algorithm, the algorithm we propose focuses on reducing global communication. The algorithm analyses a strip of the source region at a time and sweeps through the source strip by strip. At most four sweeps are needed for the analysis. By exploiting the coherence properties in the processor structure, global communication is minimized and complexity is substantially improved. Furthermore, all global write operations are exclusive and concurrency in global read operations is minimized. Since the problem size is usually large, we also designed decomposition rules to efficiently handle the cases where the required number of processors exceeds the number available. The algorithm has been implemented on a Connection Machine CM-2, and results of computational experiments are presented.

10.
Nowadays, every firm uses telecommunication networks, to different extents and in different ways, to complete its daily operations. In this article, we investigate an optimisation problem that a firm faces when acquiring network capacity from a market in which several network providers offer different pricing and quality of service (QoS) schemes. The QoS level guaranteed by the network providers and the minimum quality level of service needed for accomplishing the operations are represented as fuzzy numbers in order to handle the non-deterministic nature of the telecommunication network environment. Interestingly, the mathematical formulation of the aforementioned problem leads to a special case of the well-known two-dimensional bin packing problem, which is notorious for its computational complexity. We propose two different heuristic solution procedures capable of solving the resulting nonlinear mixed-integer programming model with fuzzy constraints. Finally, the efficiency of each algorithm is tested on several test instances to demonstrate the applicability of the methodology.

11.
A radio frequency identification (RFID) system is a contactless automatic identification system that uses small, low-cost RFID tags. The primary problem of current security- and privacy-preserving schemes is that, in order to identify just a single tag, they require linear computational complexity on the server side. We propose an efficient mutual authentication protocol for passive RFID tags that provides confidentiality, untraceability, mutual authentication, and efficiency. The proposed protocol shifts the heavy burden of asymmetric encryption and decryption operations to the more powerful server side and leaves only lightweight hash operations on the tag side. It is also efficient in terms of time complexity, space complexity, and communication cost, which are very important for practical large-scale RFID applications.
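For context, the linear server-side cost that the abstract attributes to existing hash-based schemes can be illustrated with a minimal challenge-response sketch; all names and message fields below are our own invention, and the snippet shows the baseline the proposed protocol improves on (a single hash on the tag, an exhaustive key search on the server), not the paper's protocol itself.

```python
import hashlib
import os

def h(*parts):
    """Lightweight hash - the only cryptographic operation a tag performs."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

# Hypothetical per-tag secret shared with the back-end server (illustration only).
TAG_SECRET = os.urandom(16)

def tag_respond(server_challenge):
    """Tag side: a fresh nonce plus one hash over the secret and the challenge."""
    tag_nonce = os.urandom(16)
    return tag_nonce, h(TAG_SECRET, server_challenge, tag_nonce)

def server_identify(server_challenge, tag_nonce, response, all_tag_secrets):
    """Server side of a naive hash-based scheme: identifying the tag means
    trying every candidate secret, i.e. cost linear in the number of tags.
    The proposed protocol removes this search by moving asymmetric
    encryption/decryption work onto the server."""
    for secret in all_tag_secrets:
        if h(secret, server_challenge, tag_nonce) == response:
            return secret  # tag identified and authenticated
    return None

# Usage: the reader/server issues a challenge, the tag answers with one hash.
challenge = os.urandom(16)
nonce, resp = tag_respond(challenge)
assert server_identify(challenge, nonce, resp, [TAG_SECRET]) == TAG_SECRET
```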

12.
朱一清 (Zhu Yiqing). 《计算机工程》 (Computer Engineering), 2012, 38(18): 30-33
To address the non-determinism and complexity of concurrent programs and the difficulty of obtaining their atomicity properties, a method for extracting the atomicity properties of concurrent programs is proposed. After the synchronization regions of a concurrent program are transformed into concurrent-operation graphs associated with its concurrent operations, a frequent subgraph mining algorithm is used to automatically extract the program's atomic graphs, which characterize the atomicity properties of the concurrent program, including the concurrent operations and the control dependences between them. Experimental results show that the method can effectively extract the atomicity properties of concurrent programs with a low false-detection rate.

13.
We propose an algorithm for the class of connected row convex constraints. In this algorithm, we introduce a novel variable elimination method to solve the constraints. This method is simple and able to make use of the sparsity of the problem instances. One of its key operations is the composition of two constraints. We have identified several nice properties of connected row convex constraints. Those properties enable the development of a fast composition algorithm whose complexity is linear in the size of the variable domains. Compared with existing work, including randomized algorithms, the new algorithm has favorable worst-case time and working-space complexity. Experimental results also show a significant performance margin over the existing consistency-based algorithms.
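As a point of reference for the composition operation mentioned above, the sketch below gives a generic set-based composition of two binary constraints (our own illustration, not specific to connected row convex constraints); the paper's contribution is performing this step in time linear in the domain size by exploiting the row-convex structure.

```python
def compose(r_xy, r_yz):
    """Composition of two binary constraints given extensionally as sets of
    allowed pairs: (x, z) is allowed iff some y satisfies both constraints.
    Generic set-based version with no row-convexity assumption."""
    by_y = {}
    for x, y in r_xy:
        by_y.setdefault(y, []).append(x)
    return {(x, z) for y, z in r_yz for x in by_y.get(y, [])}

# Tiny usage example over integer domains.
r1 = {(0, 1), (1, 1), (1, 2)}
r2 = {(1, 5), (2, 6)}
print(compose(r1, r2))  # {(0, 5), (1, 5), (1, 6)}
```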

14.
We study in this paper the problem of finding in a graph a subset of k edges whose deletion causes the largest increase in the weight of a minimum spanning tree. We propose for this problem an explicit enumeration algorithm whose complexity, when compared to the current best algorithm, is better for general k but very slightly worse for fixed k. More interestingly, unlike in the previous algorithms, we can easily adapt our algorithm so as to transform it into an implicit enumeration algorithm based on a branch and bound scheme. We also propose a mixed integer programming formulation for this problem. Computational results show a clear superiority of the implicit enumeration algorithm both over the explicit enumeration algorithm and the mixed integer program.
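To pin down the problem statement, the sketch below performs the explicit enumeration in the most naive way, trying every k-subset of edges and recomputing the minimum spanning tree; the graph data and function names are our own, and this is a baseline illustration rather than the paper's enumeration algorithm.

```python
from itertools import combinations

def mst_weight(n, edges):
    """Kruskal's algorithm; returns the weight of a minimum spanning tree,
    or None if the edge set no longer connects all n vertices."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    total, used = 0, 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
            used += 1
    return total if used == n - 1 else None

def k_most_vital_edges_bruteforce(n, edges, k):
    """Explicit enumeration over all k-subsets of edges: return the subset
    whose deletion causes the largest increase in MST weight (disconnection
    treated as an infinite increase). Exponential in k; shown only to make
    the problem concrete."""
    base = mst_weight(n, edges)
    best_subset, best_increase = None, -1
    for subset in combinations(range(len(edges)), k):
        drop = set(subset)
        remaining = [e for i, e in enumerate(edges) if i not in drop]
        w = mst_weight(n, remaining)
        increase = float("inf") if w is None else w - base
        if increase > best_increase:
            best_subset, best_increase = subset, increase
    return best_subset, best_increase

# Usage: a small weighted graph given as (u, v, weight) triples.
g = [(0, 1, 1), (1, 2, 2), (0, 2, 4), (2, 3, 3), (1, 3, 7)]
print(k_most_vital_edges_bruteforce(4, g, k=1))
```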

15.
We present a mixed integer linear program for the rapid transit network design problem with static modal competition. Previous discrete formulations cannot handle modal competition for realistic size instances because of the complexity of modeling alternatives for each flow in the network. We overcome this difficulty by exploiting a pre-assigned topological configuration. We discuss relevant goals of rapid transit planning, and we propose a multi-objective model conducive to a post-optimization analysis for effectiveness, efficiency, and equity concerns. A case study carried out for a metro proposal in Concepción, Chile, shows the suitability of the proposed method consisting of the mixed integer linear program coupled with the post-optimization analysis.

16.
Mäkinen, Ukkonen, Navarro. Algorithmica, 2003, 35(4): 347-369
We focus on the problem of approximate matching of strings that have been compressed using run-length encoding. Previous studies have concentrated on the problem of computing the longest common subsequence (LCS) between two strings of length m and n, compressed to m' and n' runs. We extend an existing algorithm for the LCS to the Levenshtein distance, achieving O(m'n + n'm) complexity. Furthermore, we extend this algorithm to a weighted edit distance model, where the weights of the three basic edit operations can be chosen arbitrarily. This approach also gives an algorithm for approximate searching of a pattern of m letters (m' runs) in a text of n letters (n' runs) in O(mm'n') time. Then we propose improvements for a greedy algorithm for the LCS, and conjecture that the improved algorithm has O(m'n') expected-case complexity. Experimental results are provided to support the conjecture.
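For orientation, the sketch below spells out the textbook weighted edit distance recurrence on the uncompressed strings, with the three operation weights left as parameters; the abstract's algorithm instead works directly on the run-length encoded inputs to reach O(m'n + n'm) time.

```python
def weighted_edit_distance(a, b, w_ins=1, w_del=1, w_sub=1):
    """Textbook dynamic program for edit distance with arbitrary non-negative
    weights for the three basic operations (insert, delete, substitute).
    This is the uncompressed baseline, not the run-length encoded algorithm."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * w_del
    for j in range(1, n + 1):
        d[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (0 if a[i - 1] == b[j - 1] else w_sub)
            d[i][j] = min(sub, d[i - 1][j] + w_del, d[i][j - 1] + w_ins)
    return d[m][n]

# With unit weights this is the ordinary Levenshtein distance.
print(weighted_edit_distance("kitten", "sitting"))  # 3
```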

17.
Reference counting is a commonly used technique for resource management. One key correctness criterion in the use of reference counts is that increment and decrement operations must be well-matched. In this paper we consider the problem of statically verifying that a given (sequential) program uses reference counts correctly: that is, that the program performs an equal number of increment and decrement operations on every object. We present a polynomial time algorithm for verifying this property when the program is allowed to have only shallow pointers: that is, the program may contain pointers to reference count objects, but the program does not contain pointers to pointers. We show that the problem is intractable if general (non-shallow) pointers are allowed. Our polynomial time algorithm, for the case of shallow pointers, works by reducing the problem to an affine-relation analysis problem.
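The property being verified can be stated very directly; the toy sketch below checks it dynamically on a single concrete trace (the names and trace format are our own), whereas the paper establishes it statically for all executions of a program with shallow pointers.

```python
from collections import Counter

def well_matched(trace):
    """Toy dynamic check that a straight-line trace performs an equal number
    of increment and decrement operations on every object. The paper verifies
    this statically by reduction to an affine-relation analysis; this snippet
    only illustrates the property itself."""
    net = Counter()
    for op, obj in trace:
        net[obj] += 1 if op == "inc" else -1
    return all(v == 0 for v in net.values())

# A matched trace ...
print(well_matched([("inc", "x"), ("inc", "y"), ("dec", "y"), ("dec", "x")]))  # True
# ... and one that leaks a reference to x.
print(well_matched([("inc", "x"), ("inc", "x"), ("dec", "x")]))                # False
```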

18.
Recently, a number of machine vision systems have been successfully implemented using pipeline architectures, and various new algorithms have been proposed. In this paper we propose a method of analysing both the time complexity and the space complexity of algorithms on conventional general-purpose pipeline architectures. We illustrate our method by applying it to an algorithm schema for local window operations satisfying a property we define as decomposability. It is shown that the proposed algorithm schema and its analysis generalize previously published results. We further analyse algorithms implementing operators that are not decomposable. In particular, the complexities of several median-type operations are compared and the implication for algorithm choice is discussed. We conclude with discussions of space-time trade-offs and implementation issues. This research was partially supported by a grant from the Natural Sciences and Engineering Research Council of Canada. Part of this work was done while the author was at the University of Guelph, Guelph, Ontario, Canada.

19.
Reference counting is a commonly used technique for resource management. One key correctness criterion in the use of reference counts is that increment and decrement operations must be well-matched. In this paper we consider the problem of statically verifying that a given (sequential) program uses reference counts correctly: that is, that the program performs an equal number of increment and decrement operations on every object. We present a polynomial time algorithm for verifying this property when the program is allowed to have only shallow pointers: that is, the program may contain pointers to reference count objects, but the program does not contain pointers to pointers. We show that the problem is intractable if general (non-shallow) pointers are allowed. Our polynomial time algorithm, for the case of shallow pointers, works by reducing the problem to an affine-relation analysis problem.

20.
This paper considers the relocation problem arising from public re-development projects, cast as a two-machine flowshop scheduling problem. In such a project, some buildings need to be torn down and re-constructed. The two processes of tearing down and re-constructing each building are often viewed as a single operation. However, under certain circumstances, the re-construction process, i.e., the resource recycling process, can be viewed as a separate operation. In this paper we regard these two processes as separate, on the assumption that they are handled by different working crews. We formulate the problem as a resource-constrained two-machine flowshop scheduling problem with the objective of finding a feasible re-development sequence that minimizes the makespan. We provide problem formulations, discuss complexity results, and present polynomial algorithms for various special cases of the problem.
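For the classical, resource-unconstrained two-machine flowshop, makespan minimization is solved exactly by Johnson's rule; the sketch below states that baseline with invented job data, whereas the resource-constrained relocation variant studied in the paper calls for the additional special-case algorithms it develops.

```python
def johnson_two_machine(jobs):
    """Johnson's rule for the classical (unconstrained) two-machine flowshop:
    jobs whose machine-1 time is no larger than their machine-2 time go first,
    in increasing order of machine-1 time; the remaining jobs go last, in
    decreasing order of machine-2 time. This minimizes the makespan for the
    classical problem only."""
    first = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    last = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return first + last

def makespan(sequence):
    """Completion time of the last job on machine 2 for a given sequence."""
    end1 = end2 = 0
    for _, p1, p2 in sequence:
        end1 += p1
        end2 = max(end2, end1) + p2
    return end2

# Jobs given as (name, time on machine 1, time on machine 2), e.g. a
# (tear-down time, re-construction time) pair for each building.
jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2), ("D", 6, 6)]
seq = johnson_two_machine(jobs)
print([j[0] for j in seq], makespan(seq))
```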

