Similar Documents (20 results)
1.
We outline a model for programs and data and present a formal definition of an ideal change merging operation. This model is used to develop a new semantically based method for combining changes to programs. We also evaluate the appropriateness of the change merging operation and examine circumstances in which the specifications of a program, as well as its implementations, can be used to guide the change merging process when the implementations conflict but the specifications do not. This work was performed for the Office of Naval Research and funded by the Naval Postgraduate School.

2.
Here we describe a technique for merging two (and hence more) independent C programs at the source level. Because the programs are independent, the merged program exposes more parallelism that can be exploited by the underlying compiler and CPU, so the execution time of the merged program is expected to be shorter than the time obtained by executing the two programs separately. The usefulness of such merging for embedded systems has been studied and demonstrated by the work of Dean and others with the Thrint compiler for merging threads at the assembly level. The main contribution of this work is an efficient algorithm for matching sub-components that considers the internal structure of the sub-components and not only their execution frequency. Two novel techniques for balancing the merge of sub-components are presented:
Residual loop merging (RLM) as a way to merge loops with different nesting and execution frequency levels (a minimal illustration of the underlying loop-merging idea follows below).
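The following is a minimal sketch, in plain C, of the general idea behind merging code from two independent programs: two loops that originally belonged to separate programs are fused into one loop so that each iteration carries independent work from both. The array names and loop bodies are invented for illustration; the paper's matching and balancing algorithms (including RLM) are not shown.

    /* Hedged sketch: not the paper's algorithm, only the basic idea of
       fusing loops from two independent programs so the compiler and CPU
       see more independent work per iteration. */
    #include <stdio.h>
    #define N 1000

    int main(void) {
        static int a[N], b[N];
        int sum_a = 0, sum_b = 0;

        /* Originally: program 1 runs this loop ...                   */
        /*   for (int i = 0; i < N; i++) sum_a += a[i];               */
        /* ... and program 2, independently, runs this one:           */
        /*   for (int i = 0; i < N; i++) sum_b += b[i] * 2;           */

        /* Merged program: one loop carrying both independent bodies. */
        for (int i = 0; i < N; i++) {
            sum_a += a[i];          /* from program 1 */
            sum_b += b[i] * 2;      /* from program 2 */
        }

        printf("%d %d\n", sum_a, sum_b);
        return 0;
    }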

3.
On Sunway high-performance multi-core servers, the OpenMP programs produced by the automatic parallelizing compiler to identify and declare the parallelism in a program are not sufficiently optimized: they adopt a simple fork-join model and contain a large number of nested parallel loops, which leads to low execution efficiency. To improve the efficiency of the OpenMP programs generated by the automatic parallelizing compiler, a parallel-region reconstruction optimization technique is proposed. By merging the parallel regions in a program and extending the scope of parallel regions across nested loops, parallel-region reconstruction reduces the number of parallel regions in an OpenMP program, lowers control overheads such as the frequent creation and joining of thread teams, and transforms an OpenMP program written in the simple fork-join model into a more efficient OpenMP program in the single-program multiple-data (SPMD) model. Experimental results on the new-generation Sunway high-performance multi-core server SW1621 platform show that parallel-region reconstruction improves execution efficiency by 10.77% on the NPB3.3-OMP benchmark suite and by 7.94% on the SPEC OMP2012 benchmark suite, effectively improving the execution efficiency of the OpenMP programs produced by the automatic parallelizing compiler.
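As a rough illustration of the transformation described above (not the compiler pass itself), the sketch below contrasts a fork-join OpenMP fragment, where each loop creates and joins its own thread team, with a reconstructed version in which one enclosing parallel region hosts both work-sharing loops. The arrays and loop bodies are invented for illustration.

    /* Hedged sketch of parallel-region merging in C with OpenMP. */
    #include <stdio.h>
    #define N 1000

    int main(void) {
        static double a[N], b[N];

        /* Fork-join style: two thread teams are created and joined. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++) a[i] = i * 0.5;
        #pragma omp parallel for
        for (int i = 0; i < N; i++) b[i] = a[i] + 1.0;

        /* Reconstructed style: one enclosing parallel region, one team,
           two work-sharing loops separated by an implicit barrier. */
        #pragma omp parallel
        {
            #pragma omp for
            for (int i = 0; i < N; i++) a[i] = i * 0.5;
            #pragma omp for
            for (int i = 0; i < N; i++) b[i] = a[i] + 1.0;
        }

        printf("%f %f\n", a[N-1], b[N-1]);
        return 0;
    }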

4.
This paper describes a tool for debugging programs which develop faults after they have been modified or ported to other computer systems. The tool enhances the traditional debugging approach by automating the comparison of data structures between two running programs. Using this technique, earlier versions of a program that are known to operate correctly can be used to generate values for comparison with the new program under development. The tool allows the reference code and the program being developed to execute on different computer systems by using open distributed systems techniques. A data visualisation facility allows the user to view the differences in data structures. By using the data flow of the code, faulty sections of code can be located rapidly. An evaluation is performed using three case studies to illustrate the power of the technique.
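A minimal sketch of the kind of automated comparison described above: the same data structure, captured from a trusted reference run and from the new run, is compared field by field and differences are reported. The record type, tolerance, and data are invented for illustration; the real tool exchanges values between programs running on different systems, which is not modelled here.

    /* Hedged sketch: compare a data structure from a reference run and a
       new run, in the spirit of the tool's automated comparison. */
    #include <stdio.h>
    #include <math.h>

    typedef struct { int id; double value; } record_t;

    /* Report fields that differ (beyond a tolerance) between the two runs. */
    void compare(const record_t *ref, const record_t *new_, int n) {
        for (int i = 0; i < n; i++) {
            if (ref[i].id != new_[i].id)
                printf("record %d: id %d vs %d\n", i, ref[i].id, new_[i].id);
            if (fabs(ref[i].value - new_[i].value) > 1e-9)
                printf("record %d: value %f vs %f\n",
                       i, ref[i].value, new_[i].value);
        }
    }

    int main(void) {
        record_t ref[]  = { {1, 0.5}, {2, 1.5} };  /* from the trusted version */
        record_t new_[] = { {1, 0.5}, {2, 1.6} };  /* from the ported version  */
        compare(ref, new_, 2);
        return 0;
    }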

5.
Mining frequent itemsets over data streams has attracted much research attention in recent years. In the past, we developed a hash-based approach for mining frequent itemsets over a single data stream. In this paper, we extend that approach to mine global frequent itemsets from a collection of data streams distributed at distinct remote sites. To speed up the mining process, we make the first attempt to address the new problem of continuously maintaining a global synopsis for the union of all the distributed streams. The mining results can therefore be produced on demand by directly processing the maintained global synopsis. Instead of collecting and processing all the data in a central server, which would waste the computation resources of the remote sites, distributed computations over the data streams are performed. A distributed computation framework is proposed in this paper, including two communication strategies and one merging operation. The communication strategies are designed according to an accuracy guarantee on the mining results and determine when and what the remote sites should transmit to the central server (named the coordinator). The merging operation, in turn, merges the information received from the remote sites into the global synopsis maintained at the coordinator. Together, the strategies and the operation achieve the goal of continuously maintaining the global synopsis. Rooted in the continuously maintained global synopsis, we propose a mining algorithm for finding global frequent itemsets. Moreover, correctness guarantees for the communication strategies and the merging operation, and an accuracy guarantee analysis of the mining algorithm, are provided. Finally, a series of experiments on synthetic datasets and a real dataset shows the effectiveness and efficiency of the distributed computation framework.
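As a small illustration of the coordinator-side merging operation described above, the sketch below folds the (itemset, count) pairs reported by each remote site into a global synopsis. The fixed-size table, itemset strings, and counts are invented for illustration; the paper's hash-based synopsis, communication strategies, and accuracy guarantees are not reproduced.

    /* Hedged sketch: merge per-site itemset counts into a global synopsis. */
    #include <stdio.h>
    #include <string.h>

    #define MAX_ITEMSETS 16

    typedef struct { char itemset[32]; long count; } entry_t;

    static entry_t global[MAX_ITEMSETS];
    static int global_size = 0;

    /* Add one site's local counts into the global synopsis. */
    void merge_site(const entry_t *local, int n) {
        for (int i = 0; i < n; i++) {
            int found = 0;
            for (int j = 0; j < global_size; j++) {
                if (strcmp(global[j].itemset, local[i].itemset) == 0) {
                    global[j].count += local[i].count;
                    found = 1;
                    break;
                }
            }
            if (!found && global_size < MAX_ITEMSETS)
                global[global_size++] = local[i];
        }
    }

    int main(void) {
        entry_t site1[] = { {"{a}", 10}, {"{a,b}", 4} };
        entry_t site2[] = { {"{a}", 7},  {"{b}",   9} };
        merge_site(site1, 2);
        merge_site(site2, 2);
        for (int j = 0; j < global_size; j++)
            printf("%s: %ld\n", global[j].itemset, global[j].count);
        return 0;
    }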

6.
Lock-freedom is a property of concurrent programs which states that, from any state of the program, eventually some process will complete its operation. Lock-freedom is a weaker property than the usual expectation that eventually all processes will complete their operations. By weakening their completion guarantees, lock-free programs increase the potential for parallelism, and hence make more efficient use of multiprocessor architectures than lock-based algorithms. However, lock-free algorithms, and reasoning about them, are considerably more complex. In this paper we present a technique for proving that a program is lock-free. The technique is designed to be as general as possible and is guided by heuristics that simplify the proofs. We demonstrate our theory by proving lock-freedom of two non-trivial examples from the literature. The proofs have been machine-checked by the PVS theorem prover, and we have developed proof strategies to minimise user interaction.
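A minimal sketch of the lock-freedom property itself (not the paper's proof technique or its case-study algorithms): a counter updated with a compare-and-swap retry loop in C11 atomics. A CAS can fail only because another thread's CAS succeeded, so some operation always completes even if an individual thread keeps retrying.

    /* Hedged sketch: a lock-free increment using C11 atomics. */
    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic long counter = 0;

    /* Retry loop: failure of the CAS means another thread made progress,
       so the system as a whole never gets stuck. */
    void increment(void) {
        long old = atomic_load(&counter);
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
            /* 'old' has been refreshed with the current value; retry. */
        }
    }

    int main(void) {
        increment();
        printf("%ld\n", atomic_load(&counter));
        return 0;
    }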

7.
徐杨  袁峰  林琪  汤德佑  李东 《软件学报》2018,29(2):396-416
Process mining is a research hotspot at the intersection of process management and data mining. In real business environments, the data produced by process execution is often recorded separately in different event logs, and these event logs must be merged into a single event log file before current process mining techniques, which assume a single event log, can be applied. However, event log merging is challenging because there may be many-to-many matching relations between execution instances across logs and because information needed for merging may be missing. This paper gives a formal definition of the event log merging problem, shows that it is a search optimization problem, and proposes an event log merging method based on a hybrid artificial immune algorithm: an initial population is generated by a heuristic method and, based on the clonal selection theory of artificial immune systems, immune evolution is used to obtain the "best" merging solution, thereby supporting log merging with many-to-many instance matching relations. Two instance-level factors are considered, the frequency of process execution paths and the temporal matching relations between process instances, so that individuals in the evolving population are evaluated along two dimensions: "quantity" matching and "time" matching. An immune memory library and a simulated annealing mechanism are introduced to preserve the diversity of each new generation and reduce the probability of premature convergence. Experimental results show that the method achieves event log merging with many-to-many instance matching relations, and that, compared with randomly generated initial populations, the heuristic method accelerates immune evolution. The paper also discusses the data partitioning problem in distributed merging of large-scale event logs, aiming to improve merging performance with distributed techniques.

8.
The bulk synchronous parallel (BSP) model is underpinned by a global synchronisation operation. This operation both synchronises processes and updates local stores using asynchronous communications. In this paper a generalised, nondeterministic substitution axiom (over functions) is identified with BSP barrier synchronisation. Correctness proofs of BSP programs based on the axiom are structurally similar to proofs of correctness of sequential programs. Global synchronisation is often considered relatively expensive to implement on distributed architectures for applications with low levels of communication; in an extreme situation, a process having no external interactions may have to wait unnecessarily at a barrier. In this paper a form of barrier operation is introduced which is straightforward to reason about and, with a view to increasing the potential for efficient implementation, permits barrier operations to be relaxed through partial uncoupling, thereby allowing processes which do not need to synchronise to proceed. Work partially supported by Spanish CICYT under grant TIC2002-04498-C05-03 (Tracer), the IST programme of the EU under contract IST-2001-33116 (Flags), and the Future and Emerging Technologies programme of the EU under contract IST-1999-14186 (ALCOM-FT). Received February 2003; accepted in revised form August 2003 by R. C. Backhouse.

9.
This paper presents a modeling approach based on stochastic Petri nets to estimate the reliability and availability of programs in a distributed computing system environment. In this environment, successful execution of a program is conditioned on successful access to the related files distributed throughout the system. The use of stochastic Petri nets is demonstrated by extending a basic reliability model to account for repair actions when faults occur. To this end, two possible models are discussed: the global repair model, which assumes a centralized repair team that restores the system to its original status when a failure state is reached, and the local repair model, which assumes that repairs are localized to the node where they occur. The former model is useful for evaluating the availability of programs (or of the hardware support) subject to hardware faults that are repaired globally, so the programs of interest can be interrupted. The latter model can be used to evaluate program reliability in the presence of hardware faults subject to repair, without interrupting the normal operation of the system.

10.
A Lockset-Based Dynamic Data Race Detection Method
Data races make shared-memory programs difficult to debug. Most previous dynamic data race detection work for shared-memory programs has been implemented by maintaining the happens-before order. An important drawback of this approach is that it checks a single execution of the program under one particular input and therefore cannot detect all feasible data races. Using the framework model of memory consistency models, this paper proposes an enhanced happens-before relation for the scope consistency model and, based on it, derives a lockset-based dynamic data race detection algorithm that overcomes this problem. An implementation on the software DSM system JIAJIA achieves good performance, with an average application slowdown of 3.14. Using this method, a large number of read-write data races were found in the TSP program.
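As a rough illustration of the lockset idea that the algorithm above builds on, the sketch below shows Eraser-style lockset refinement: each shared variable keeps a candidate set of protecting locks, intersected with the locks held at every access, and an empty set signals a potential race. The bit-set encoding and access sequence are invented for illustration; the enhanced happens-before relation for scope consistency is not modelled.

    /* Hedged sketch: Eraser-style lockset refinement for one variable. */
    #include <stdio.h>

    typedef unsigned int lockset_t;   /* bit i set => lock i is held */

    /* Candidate lockset of one shared variable, refined on every access. */
    static lockset_t candidate = ~0u; /* initially "all locks" */

    /* Called by the instrumentation on each access to the variable. */
    void on_access(int thread_id, lockset_t locks_held) {
        candidate &= locks_held;              /* intersect with held locks */
        if (candidate == 0)
            printf("potential data race (thread %d)\n", thread_id);
    }

    int main(void) {
        on_access(1, 0x1);   /* thread 1 holds lock 0                       */
        on_access(2, 0x2);   /* thread 2 holds lock 1 -> empty set, report  */
        return 0;
    }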

11.
Concurrent processes can be used both for programming computation and for programming storage. Previous implementations of Flat GHC, however, have been tuned for computation-intensive programs, and perform poorly for storage-intensive programs (such as programs implementing reconfigurable data structures using processes and streams) and demand-driven programs. This paper proposes an optimization technique for programs in which processes are almost always suspended. The technique compiles unification for data transfer into message passing. Instead of reducing the number of process switching operations, the technique optimizes the cost of each process switching operation and reduces the number of cons operations for data buffering. The technique is based on a mode system which is powerful enough to analyze bidirectional communication and streams of streams. The mode system is based on mode constraints imposed by individual clauses rather than on global dataflow analysis, enabling separate analysis of program modules. The introduction of a mode system into Flat GHC effectively subsets Flat GHC; the resulting language is called Moded Flat GHC. Moded Flat GHC programs enjoy two important properties under certain conditions: (1) reduction of a goal clause retains the well-modedness of the clause, and (2) when execution terminates, all the variables in an initial goal clause will be bound to ground terms. Practically, the computational complexity of all-at-once mode analysis can be made almost linear with respect to the size n of the program and the complexity of the data structures used, and the complexity of separate analysis is higher only by O(log n) times. Mode analysis provides useful information for debugging as well. Benchmark results show that the proposed technique substantially improves the performance of storage-intensive programs and demand-driven programs compared with a conventional native-code implementation. It also improves the performance of some computation-intensive programs. We expect that the proposed technique will expand the application areas of concurrent logic languages.

12.
A distributed program is a collection of several processes which execute concurrently, possibly on different nodes of a distributed system, and which cooperate with each other to realize a common goal. In this paper, we present a design of communication and synchronization primitives for distributed programs. The primitives are designed such that they can be provided by the kernel of a distributed operating system. An important feature of the design is that the configuration of a process, i.e., the identities of the processes with which it communicates, is specified separately from the computation performed by the process. This permits easy configuration and reconfiguration of processes. We identify different kinds of communication failures and provide distinct mechanisms for handling them. The communication primitives are not atomic actions. To enable the construction of atomic actions, two new program components, the atomic agent and the manager, are introduced. These are devoid of policy decisions regarding concurrency control and atomic commitment. We introduce the notion of a conflicts relation with which a designer can construct either an optimistic or a pessimistic concurrency control scheme. The design also incorporates primitives for constructing nested atomic actions.

13.
An adaptive program is one that changes its behavior based on the current state of its environment. This notion of adaptivity is formalized, and a logic for reasoning about adaptive programs is presented. The logic includes several composition operators that can be used to define an adaptive program in terms of given constituent programs; programs resulting from these compositions retain the adaptive properties of their constituent programs. The authors begin by discussing adaptive sequential programs, then extend the discussion to adaptive distributed programs. The relationship between adaptivity and self-stabilization is discussed. A case study is presented for constructing an adaptive distributed program in which a token is circulated in a ring of processes.

14.
Research on a Robot Controller Based on the C/S Model and Its Application
王宏杰  颜国正  林良明 《机器人》2002,24(3):228-233
With the development of network communication technology and the evolution of control systems, applying network technology to robot systems so that robots achieve an open, distributed networked control system has become a hot topic in robot controller research. This paper presents the implementation principle of a robot controller based on the C/S (Client/Server) model, describes the development of client-server programs for the robot control network based on the RPC (Remote Procedure Call) mechanism of the Windows NT operating system, and implements DCOM encapsulation of the software. Finally, examples of a Win32 flexible robot automation application under Windows NT and a Web-based remote robot monitoring system, both developed with DCOM technology, demonstrate the advantages of the distributed control network of a C/S-model robot controller.

15.
In a typical distributed computing system (DCS), nodes consist of processing elements, memory units, shared resources, data files, and programs. For a distributed application, programs and data files are distributed among many processing elements that may exchange data and control information via communication links. The reliability of a DCS can be expressed through the analysis of distributed program reliability (DPR) and distributed system reliability (DSR). In this paper, two reliability measures, Markov-chain distributed program reliability (MDPR) and Markov-chain distributed system reliability (MDSR), are introduced to accurately model the reliability of a DCS. A discrete time Markov chain with one absorbing state is constructed for this problem. The transition probability matrix is employed to represent the probability of moving from one state to another in a unit of time. In addition to a mathematical method for evaluating MDPR and MDSR, a simulation result is also presented to verify their correctness.

16.
Since today's televisions can receive more and more programs, and are often viewed by groups of people, such as a family or a student dormitory, this paper proposes a TV program recommendation strategy for multiple viewers based on user profile merging. The paper first introduces three alternative strategies for program recommendation to multiple television viewers, discusses and analyzes their respective advantages and disadvantages, and then chooses the strategy based on user profile merging as our solution. The selected strategy first merges all user profiles to construct a common user profile, and then uses a recommendation approach to generate a common program recommendation list for the group according to the merged user profile. The paper then describes in detail the user profile merging scheme, the key technology of the strategy, which is based on total distance minimization. The evaluation results show that the merging result appropriately reflects the preferences of the majority of members within the group, and that the proposed recommendation strategy is effective for multiple viewers watching TV together.
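A minimal sketch of profile merging under one simplifying assumption: if the total distance to be minimised is the sum of squared Euclidean distances between the merged profile and each member's profile, the minimiser is the per-genre average, which is all the code below computes. The genre set and preference weights are invented for illustration, and this is not the paper's actual merging scheme.

    /* Hedged sketch: merge a group's preference profiles into one profile. */
    #include <stdio.h>

    #define GENRES  3   /* e.g. news, sports, drama */
    #define VIEWERS 3

    int main(void) {
        /* Each row is one viewer's preference weight per genre (0..1). */
        double profile[VIEWERS][GENRES] = {
            { 0.9, 0.1, 0.4 },
            { 0.2, 0.8, 0.5 },
            { 0.6, 0.3, 0.9 },
        };
        double merged[GENRES] = { 0.0 };

        for (int g = 0; g < GENRES; g++) {
            for (int v = 0; v < VIEWERS; v++)
                merged[g] += profile[v][g];
            merged[g] /= VIEWERS;        /* minimises total squared distance */
            printf("genre %d: %.2f\n", g, merged[g]);
        }
        return 0;
    }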

17.
Concurrent programming is more difficult to use and understand than sequential programming. In order to simplify this type of programming, a number of approaches have been developed, such as visual programming. Visual Occam (VISO) is a visual programming language for concurrent programming. It has a graphical syntax based on the language Occam, and its semantics is represented both in Petri nets and in process calculus. This paper presents a modular visual approach to writing concurrent programs using the VISO language. Concurrent programs in VISO are specified graphically at different levels of abstraction. The paper describes this modular visual approach by constructing two examples in VISO. The first example is a simple concurrent program and is mainly used to show the details of constructing a concurrent program in VISO. The second example is a larger concurrent program with more levels of abstraction.

18.
An Analysis of Data Access Techniques in Distributed Applications
This paper describes problems that arise at run time in two kinds of distributed applications during software system development, analyzes the processes on the database server as well as the application processes on the operating systems of both the client and the server, identifies the causes of the problems, and proposes several practical solutions. It also compares the differences between the two kinds of distributed applications when using ADO.NET to access data.

19.
Finding changed identifiers is important for understanding the difference between two versions of a program and for detecting and resolving conflicts while merging variants of a program. Standard practice for differencing and merging relies on line-based techniques that do not recognize renamed identifiers. The design and implementation of a tool to automatically detect renamed identifiers between two versions of a program is presented. The system uses an abstract representation of language constructs to enable language awareness without introducing language dependence. Modules for Java and Scheme have been written. The detector works with multiple file pairs, taking into account renamings that span several files. A case study is presented as a proof of concept. The detector is part of a suite of intelligent differencing and merging programs that exploit the static semantics of programming languages.
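A deliberately naive sketch of rename detection: identifiers that disappear between two versions are paired with identifiers that newly appear, yielding rename candidates. The identifier lists are invented for illustration; the actual tool works on an abstract representation of language constructs and matches renamings across multiple files, which this sketch does not attempt.

    /* Hedged sketch: pair vanished identifiers with newly appearing ones. */
    #include <stdio.h>
    #include <string.h>

    static int contains(const char **set, int n, const char *name) {
        for (int i = 0; i < n; i++)
            if (strcmp(set[i], name) == 0) return 1;
        return 0;
    }

    int main(void) {
        const char *old_ids[] = { "count", "totalPrice", "flush" };
        const char *new_ids[] = { "count", "totalCost",  "flush" };
        int n_old = 3, n_new = 3;

        /* Identifiers only in the old version are rename candidates ... */
        for (int i = 0; i < n_old; i++) {
            if (!contains(new_ids, n_new, old_ids[i])) {
                /* ... matched against identifiers only in the new version. */
                for (int j = 0; j < n_new; j++)
                    if (!contains(old_ids, n_old, new_ids[j]))
                        printf("possible rename: %s -> %s\n",
                               old_ids[i], new_ids[j]);
            }
        }
        return 0;
    }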

20.
Program slicing is a well-known decomposition technique that transforms a large program into a smaller one containing only the statements relevant to the computation of a selected function. In this paper, we present two novel predicate-based dynamic slicing algorithms for message-passing programs. Unlike more traditional slicing criteria, which focus only on the parts of the program that influence a variable of interest at a specific position, a predicate focuses on those parts of the program that influence the predicate. Dynamic predicate slices capture global requirements or suspected error properties of a distributed program and compute all statements that are relevant. The presented algorithms differ from each other in their computational approaches (forward versus backward) and in the granularity of the information they provide. A proof of correctness of these algorithms is provided. Through the introduction of dominant states and dominant events, critical statement executions that change the value of the global predicate are identified. Under this formulation, optimizing dynamic predicate slicing becomes a meaningful goal as well. Finally, we show how predicate slices can be applied to support comprehension tasks for analyzing and maintaining distributed programs.
