Similar Documents
20 similar documents found (search time: 15 ms).
1.
Trace-based derivation of a scalable lock-free stack algorithm    (Cited by 1; self-citations: 1, other: 0)
We show how a sophisticated, lock-free concurrent stack implementation can be derived from an abstract specification in a series of verifiable steps. The algorithm is based on the scalable stack algorithm of Hendler et al. (Proceedings of the sixteenth annual ACM symposium on parallel algorithms, 27–30 June 2004, Barcelona, Spain, pp 206–215), which allows push and pop operations to be paired off and eliminated without affecting the central stack, thus reducing contention on the stack, and allowing multiple pairs of push and pop operations to be performed in parallel. Our algorithm uses a simpler data structure than Hendler, Shavit and Yerushalmi’s, and avoids an ABA problem. We first derive a simple lock-free stack algorithm using a linked-list implementation, and discuss issues related to memory management and the ABA problem. We then add an abstract model of the elimination process, from which we derive our elimination algorithm. This allows the basic algorithmic ideas to be separated from implementation details, and provides a basis for explaining and comparing different variants of the algorithm. We show that the elimination stack algorithm is linearisable by showing that any execution of the implementation can be transformed into an equivalent execution of an abstract model of a linearisable stack. Each step in the derivation is either a data refinement which preserves the level of atomicity, an operational refinement which may alter the level of atomicity, or a refactoring step which alters the structure of the system resulting from the preceding derivation. We verify our refinements using an extension of Lipton’s reduction method, allowing concurrent and non-concurrent aspects to be considered separately.
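For readers unfamiliar with the starting point of such derivations, here is a minimal Python sketch of the Treiber-style central stack that elimination layers sit in front of. Python has no hardware compare-and-swap, so `_cas` emulates one with a lock; all names are illustrative, and the elimination layer of Hendler et al. (which pairs off a colliding push and pop away from `top`) is not shown.

```python
import threading

class Node:
    def __init__(self, value, nxt=None):
        self.value, self.nxt = value, nxt

class TreiberStack:
    def __init__(self):
        self.top = None
        self._lock = threading.Lock()        # stands in for an atomic CAS

    def _cas(self, expected, new):
        with self._lock:                     # emulated compare-and-swap on top
            if self.top is expected:
                self.top = new
                return True
            return False

    def push(self, value):
        while True:                          # retry loop: the classic lock-free shape
            old = self.top
            if self._cas(old, Node(value, old)):
                return

    def pop(self):
        while True:
            old = self.top
            if old is None:
                return None                  # empty stack
            if self._cas(old, old.nxt):
                return old.value
```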

2.
A Functional Abstract Notation (FAN) is proposed for the specification and design of parallel algorithms by means of skeletons - high-level patterns with parallel semantics. The main weakness of the current programming systems based on skeletons is that the user is still responsible for finding the most appropriate skeleton composition for a given application and a given parallel architecture.

We describe a transformational framework for the development of skeletal programs which is aimed at filling this gap. The framework makes use of transformation rules which are semantic equivalences among skeleton compositions. For a given problem, an initial, possibly inefficient skeleton specification is refined by applying a sequence of transformations. Transformations are guided by a set of performance prediction models which forecast the behavior of each skeleton and the performance benefits of different rules. The design process is supported by a graphical tool which locates applicable transformations and provides performance estimates, thereby helping the programmer in navigating through the program refinement space. We give an overview of the FAN framework and exemplify its use with performance-directed program derivations for simple case studies. Our experience can be viewed as a first feasibility study of methods and tools for transformational, performance-directed parallel programming using skeletons.
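As a concrete illustration (not the FAN notation itself), the sketch below renders one classic semantic equivalence such a framework can apply, map fusion, in Python; the skeleton names `MAP` and `PIPE` and the cost remark are invented for the example.

```python
from functools import reduce

def MAP(f):                  # data-parallel skeleton: apply f to every element
    return lambda xs: [f(x) for x in xs]

def PIPE(*stages):           # composition skeleton: feed each stage's output onward
    return lambda xs: reduce(lambda acc, s: s(acc), stages, xs)

def fuse_maps(f, g):
    # Transformation rule:  PIPE(MAP(f), MAP(g))  ==  MAP(lambda x: g(f(x)))
    # A real framework would consult its performance models (e.g., the cost of
    # the intermediate collection) before deciding to apply this rewrite.
    return MAP(lambda x: g(f(x)))

prog  = PIPE(MAP(lambda x: x + 1), MAP(lambda x: x * 2))
fused = fuse_maps(lambda x: x + 1, lambda x: x * 2)
assert prog(range(5)) == fused(range(5))     # semantic equivalence holds
```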

3.
曲仁军  徐珍珍  张永军 《软件》2012,(8):138-143
Building on a study of existing image-processing approaches to reading pointer-type instruments, and taking into account the characteristics of such instruments in industrial control environments, this paper designs a fast and accurate needle-recognition algorithm based on multi-resolution processing and the Hough transform; the algorithm improves upon and fuses existing recognition schemes. To test the algorithm's performance, a simple and easily deployable needle-recognition system was built on top of it. Operational data from the system show that the proposed algorithm can quickly and accurately read pointer-type instruments in industrial control environments on resource-constrained embedded systems.
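As a rough illustration of the multi-resolution-plus-Hough idea (assuming OpenCV; this is a generic sketch, not the paper's exact pipeline): a coarse pyramid level cheaply locates the dial, and the probabilistic Hough transform then finds the needle as the dominant line segment at full resolution. The thresholds are placeholders.

```python
import cv2
import numpy as np

def read_needle_angle(gray):
    """gray: 8-bit single-channel image of the gauge."""
    coarse = cv2.pyrDown(cv2.pyrDown(gray))          # multi-resolution step (1/4 scale)
    circles = cv2.HoughCircles(coarse, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=coarse.shape[0] // 2)
    if circles is None:
        return None
    cx, cy, r = (circles[0][0] * 4).astype(int)      # map dial back to full scale
    roi = gray[max(cy - r, 0):cy + r, max(cx - r, 0):cx + r]
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=r // 2, maxLineGap=5)
    if lines is None:
        return None
    # take the longest segment as the needle and report its angle
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))
```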

4.
Pointer analysis is a technique to identify at compile-time the potential values of the pointer expressions in a program, which promises significant benefits for optimizing and parallelizing compilers. In this paper, a new approach to pointer analysis for assignments is presented. In this approach, assignments are classified into three categories: pointer assignments, structure (union) assignments, and normal assignments, which don't affect the points-to information. Pointer analyses for these three kinds of assignments respectively make up the integrated algorithm. When analyzing a pointer assignment, a new method called expression expansion is used to calculate both the left targets and the right targets. The integration of recursive data structure analysis into pointer analysis is a significant contribution of this paper, which unifies the pointer analysis for heap variables and the pointer analysis for stack variables. This algorithm is implemented in Agassiz, an analyzing tool for C programs developed by the Institute of Parallel Processing, Fudan University. Its accuracy and effectiveness are illustrated by experimental data.
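A toy flow-insensitive rendering of the three assignment categories may help fix ideas; `pts` maps a variable (or a (variable, field) pair, standing in for structures) to its points-to set. The paper's expression expansion and recursive-data-structure handling are far richer than this sketch.

```python
pts = {}   # var (or (var, field)) -> set of abstract locations

def targets(v):
    return pts.setdefault(v, set())

def analyze(stmt):
    kind, lhs, rhs = stmt
    if kind == "addr":                 # pointer assignment  p = &x
        targets(lhs).add(rhs)
    elif kind == "copy":               # pointer assignment  p = q
        targets(lhs).update(targets(rhs))
    elif kind == "struct":             # structure assignment s = t: copy per field
        fields = [k for k in pts if isinstance(k, tuple) and k[0] == rhs]
        for (_, field) in fields:
            targets((lhs, field)).update(targets((rhs, field)))
    # "normal" assignments never touch pts

analyze(("addr", "p", "x"))
analyze(("copy", "q", "p"))
assert pts["q"] == {"x"}
```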

5.
This paper examines a class of algorithms for “compiling” influence diagrams into a set of simple decision rules. These decision rules define simple-to-execute, complete, consistent, and near-optimal decision procedures. These compilation algorithms can be used to derive decision procedures for manually solving time-constrained decision problems.

6.
We present a Reinforcement Learning (RL) algorithm based on policy iteration for solving average reward Markov and semi-Markov decision problems. In the literature on discounted reward RL, algorithms based on policy iteration and actor-critic algorithms have appeared. Our algorithm is an asynchronous, model-free algorithm (which can be used on large-scale problems) that hinges on the idea of computing the value function of a given policy and searching over policy space. In the applied operations research community, RL has been used to derive good solutions to problems previously considered intractable. Hence in this paper, we have tested the proposed algorithm on a commercially significant case study related to a real-world problem from the airline industry. It focuses on yield management, which has been hailed as the key factor for generating profits in the airline industry. In the experiments conducted, we use our algorithm with a nearest-neighbor approach to tackle a large state space. We also present a convergence analysis of the algorithm via an ordinary differential equation method.  相似文献   
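For orientation, here is a minimal average-reward RL update in the R-learning style; it is not the paper's policy-iteration-based algorithm, but it shows the characteristic feature of the setting: the average reward rho is estimated online and subtracted from each immediate reward. The `env` interface (with `actions(s)` and `transition(s, a) -> (reward, next_state)`) and the step sizes are placeholders.

```python
import random
from collections import defaultdict

Q = defaultdict(float)        # relative action values
rho = 0.0                     # running estimate of average reward per step
alpha, beta, eps = 0.1, 0.01, 0.1

def step(env, state):
    global rho
    actions = env.actions(state)
    a = random.choice(actions) if random.random() < eps else \
        max(actions, key=lambda b: Q[(state, b)])
    r, s2 = env.transition(state, a)
    best_here = max(Q[(state, b)] for b in actions)
    best_next = max(Q[(s2, b)] for b in env.actions(s2))
    # average-reward temporal difference: rewards are measured relative to rho
    Q[(state, a)] += alpha * (r - rho + best_next - Q[(state, a)])
    if Q[(state, a)] >= best_here:           # rho is updated on greedy moves only
        rho += beta * (r - rho + best_next - best_here)
    return s2
```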

7.
Communication networks form the backbone of our society. Topology control algorithms optimize the topology of such communication networks. Due to the importance of communication networks, a topology control algorithm should guarantee certain required consistency properties (e.g., connectivity of the topology), while achieving desired optimization properties (e.g., a bounded number of neighbors). Real-world topologies are dynamic (e.g., because nodes join, leave, or move within the network), which requires topology control algorithms to operate in an incremental way, i.e., based on the recently introduced modifications of a topology. Visual programming and specification languages are a proven means for specifying the structure as well as consistency and optimization properties of topologies. In this paper, we present a novel methodology, based on a visual graph transformation and graph constraint language, for developing incremental topology control algorithms that are guaranteed to fulfill a set of specified consistency and optimization constraints. More specifically, we model the possible modifications of a topology control algorithm and the environment using graph transformation rules, and we describe consistency and optimization properties using graph constraints. On this basis, we apply and extend a well-known constructive approach to derive refined graph transformation rules that preserve these graph constraints. We apply our methodology to re-engineer an established topology control algorithm, kTC, and evaluate it in a network simulation study to show the practical applicability of our approach.
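As a flavour of the kind of rule involved, the predicate below is a simplified, possibly inexact rendering of a kTC-style triangle rule: an edge may be inactivated when it is the longest edge of some triangle and exceeds k times the triangle's shortest edge. In the paper this is expressed as a graph transformation rule that is then refined so that graph constraints (e.g., connectivity via the active subgraph) stay preserved; the data layout here is invented.

```python
def ktc_inactive(G, w, k, u, v):
    """G: adjacency dict; w: edge weights keyed by frozenset({u, v})."""
    e = w[frozenset((u, v))]
    for t in set(G[u]) & set(G[v]):          # common neighbours form triangles
        a, b = w[frozenset((u, t))], w[frozenset((v, t))]
        if e >= max(a, b) and e > k * min(a, b):
            return True                       # rule matches: mark (u, v) inactive
    return False
```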

8.
Human programmers seem to know a lot about programming. This suggests a way to try to build automatic programming systems: encode this knowledge in some machine-usable form. In order to test the viability of this approach, knowledge about elementary symbolic programming has been codified into a set of about four hundred detailed rules, and a system, called PECOS, has been built for applying these rules to the task of implementing abstract algorithms. The implementation techniques covered by the rules include the representation of mappings as tables, sets of pairs, property list markings, and inverted mappings, as well as several techniques for enumerating the elements of a collection. The generality of the rules is suggested by the variety of domains in which PECOS has successfully implemented abstract algorithms, including simple symbolic programming, sorting, graph theory, and even simple number theory. In each case, PECOS's knowledge of different techniques enabled the construction of several alternative implementations. In addition, the rules can be used to explain such programming tricks as the use of property list markings to perform an intersection of two linked lists in linear time. Extrapolating from PECOS's knowledge-based approach and from three other approaches to automatic programming (deductive, transformational, high-level language), the future of automatic programming seems to involve a changing role for deduction and a range of positions on the generality-power spectrum.
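The "property list markings" trick the abstract alludes to can be rendered in a few lines of Python, with a hash set standing in for marks on a symbol's property list: mark every member of the first list, then filter the second against the marks, giving a linear-time intersection of two linked lists.

```python
class Cell:                      # a Lisp-style cons cell
    def __init__(self, item, rest=None):
        self.item, self.rest = item, rest

def intersect(xs, ys):
    marked, out = set(), None    # the set plays the role of property-list marks
    while xs:
        marked.add(xs.item)
        xs = xs.rest
    while ys:
        if ys.item in marked:    # O(1) membership test per element
            out = Cell(ys.item, out)
        ys = ys.rest
    return out
```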

9.
Traditionally, prototype-based fuzzy clustering algorithms such as the Fuzzy C-Means (FCM) algorithm have been used to find “compact” or “filled” clusters. Recently, there have been attempts to generalize such algorithms to the case of hollow or “shell-like” clusters, i.e., clusters that lie in subspaces of feature space. The shell clustering approach provides a powerful means to solve the hitherto unsolved problem of simultaneously fitting multiple curves/surfaces to unsegmented, scattered and sparse data. In this paper, we present several fuzzy and possibilistic algorithms to detect linear and quadric shell clusters. We also introduce generalizations of these algorithms in which the prototypes represent sets of higher-order polynomial functions. The suggested algorithms provide a good trade-off between computational complexity and performance; however, since the objective function used in these algorithms is the sum of squared distances, the clustering is sensitive to noise and outliers. We show that by using a possibilistic approach to clustering, one can make the proposed algorithms robust.
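To make the shell idea concrete, here is a small numpy sketch for circular shells: the distance of a point to a shell prototype (centre c, radius r) is (||x - c|| - r)^2, and FCM-style alternation re-estimates memberships from these distances. The prototype update below is a crude gradient step, not the paper's update equations; m is the usual fuzzifier, and all parameter values are placeholders.

```python
import numpy as np

def fuzzy_c_shells(X, c=2, m=2.0, iters=50, lr=0.05,
                   rng=np.random.default_rng(0)):
    X = np.asarray(X, float)
    centers = X[rng.choice(len(X), c, replace=False)].copy()
    radii = np.ones(c)
    for _ in range(iters):
        # shell distances: | ||x - c|| - r | squared, for every point/prototype
        d = np.abs(np.linalg.norm(X[:, None] - centers[None], axis=2) - radii) ** 2
        d = np.maximum(d, 1e-12)
        U = d ** (-1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships
        for j in range(c):                        # gradient step on each prototype
            diff = X - centers[j]
            dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)
            wgt = (U[:, j] ** m) * (dist - radii[j])
            centers[j] += lr * ((wgt / dist)[:, None] * diff).sum(axis=0)
            radii[j] += lr * wgt.sum()
    return centers, radii, U
```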

10.
The work deals with automatic deductive synthesis of functional programs. A formal specification of a program is taken as a mathematical existence theorem, and while proving it, we derive a program and simultaneously prove that this program corresponds to the given specification. Several problems have to be resolved for automatic synthesis: the choice of synthesis rules that allow us to derive the basic constructions of a functional program, the order of rule application, and the choice of a particular induction rule. The method proposed here is based on the deductive tableau method. The basic method gives rules for functional program construction. To determine the proof strategy we use some external heuristics, including rippling. For the formation of induction hypotheses, the combination of rippling and the deductive tableau method proved very useful. The proposed techniques are implemented in the system ALISA (Automatic Lisp Synthesizer) and used for automatic synthesis of several functions in the Lisp language. The work has been partially supported by RFBR grant 05-01-00948a.

11.
We describe an approach to verifying concurrent data structures based on simulation between two Input/Output Automata (IOAs), modelling the specification and the implementation. We explain how we used this approach in mechanically verifying a simple lock-free stack implementation using forward simulation, and briefly discuss our experience in verifying three other lock-free algorithms which all required the use of backward simulation.

12.
In this paper we use a program transformational approach to obtain an asymptotically improved may-alias analysis algorithm. We derive an O(N³) time algorithm for computing an intra-procedural flow-sensitive may-alias analysis, where N denotes the number of edges in the program control flow graph (CFG). Our algorithm improves the previous O(N⁵) time algorithm by Hind et al. [19]. Our time complexity improvement comes without any deterioration in space complexity. We also show that for a large subclass of programs in which the in-degree and out-degree of all CFG nodes is bounded by a constant, our algorithm is linear in the sum of the number of edges in the CFG of the program and the size of the output, i.e., the size of the computed alias information, and is therefore asymptotically optimal. Our transformational algorithm derivation technique also leads to a simplified yet precise analysis of time complexity. The work in this paper was done when the author was a graduate student at New York University. This paper was originally submitted when the author was a Research Staff Member at the IBM T.J. Watson Research Center.
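For background (this is generic scaffolding, not the paper's O(N³) algorithm), a flow-sensitive intra-procedural analysis of this kind is typically computed as a worklist fixpoint over the CFG; the alias lattice and transfer function are left abstract below.

```python
from collections import deque

def solve(cfg_succ, transfer, join, bottom):
    """cfg_succ: node -> list of successors.
    join must accept an empty list (it then supplies the boundary value)."""
    preds = {n: [] for n in cfg_succ}
    for n, ss in cfg_succ.items():
        for s in ss:
            preds[s].append(n)
    IN = {n: bottom() for n in cfg_succ}
    OUT = {n: bottom() for n in cfg_succ}
    work = deque(cfg_succ)
    while work:
        n = work.popleft()
        IN[n] = join([OUT[p] for p in preds[n]])
        new = transfer(n, IN[n])
        if new != OUT[n]:                 # only propagate real changes
            OUT[n] = new
            work.extend(cfg_succ[n])
    return IN, OUT
```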

13.
This paper concerns calculational methods of refinement in state-based specification languages. Data refinement is a well-established technique for transforming specifications of abstract data types into ones which are closer to an eventual implementation. The conditions under which a transformation is a correct refinement are encapsulated into two simulation rules: downward and upward simulations. One approach for refining an abstract system is to specify the concrete data type, and then attempt to verify that it is a valid refinement of the abstract type. An alternative approach is to calculate the concrete specification based upon the abstract specification and a retrieve relation, which links the abstract and concrete states. In this paper we generalise existing calculational methods for downward simulations and derive similar results for upward simulations; we also document their use and application in a particular specification language, namely Z.
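For orientation, the standard downward-simulation proof obligations for Z-style abstract data types, in a textbook formulation rather than one quoted from this paper, are the following (for abstract operation AOp, concrete operation COp, and retrieve relation R); the upward-simulation obligations are the dual conditions.

```latex
% Downward simulation conditions: initialisation, applicability, correctness.
\begin{align*}
  &\forall C' \bullet CInit \implies \exists A' \bullet AInit \land R'
    && \text{(initialisation)}\\
  &\forall A; C \bullet \operatorname{pre} AOp \land R \implies \operatorname{pre} COp
    && \text{(applicability)}\\
  &\forall A; C; C' \bullet \operatorname{pre} AOp \land R \land COp
      \implies \exists A' \bullet R' \land AOp
    && \text{(correctness)}
\end{align*}
```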

14.
G. Leoni, R. Sprugoli. Calcolo, 1977, 14(3): 261–284
Markov algorithms have received very little attention in the studies about formal languages, so the purpose of the present paper is twofold: i) to characterize languages in terms of Markov algorithms, and ii) to produce automatically Markov algorithms accepting or parsing languages generated by given grammars. We use a particular, although universal, subclass of Markov algorithms, which we call “pointer Markov algorithms”; we obtain a characterization of i) regular, ii) deterministic context-free, and iii) type 0 languages which is quite “natural” in terms of these algorithms. Furthermore, we show that, given a right linear or a strong LL(k) grammar, it is possible to produce automatically a pointer Markov algorithm parsing the language generated by the grammar. These constructions are particularly interesting because pointer Markov algorithms can be compiled conveniently into machine code programs.
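A normal (Markov) algorithm is just an ordered list of rewrite rules applied to the leftmost occurrence; the tiny interpreter below shows the execution model (the paper's pointer refinement is not modelled, and the unary-addition example is illustrative).

```python
def run_markov(rules, word, max_steps=10_000):
    """rules: list of (lhs, rhs, is_final) triples, tried in order."""
    for _ in range(max_steps):
        for lhs, rhs, final in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)   # rewrite leftmost occurrence
                break
        else:
            return word                            # no rule applies: halt
        if final:
            return word                            # terminating rule fired
    raise RuntimeError("step bound exceeded")

# example: unary addition, erase the '+' between two tallies
assert run_markov([("+", "", False)], "||+|||") == "|||||"
```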

15.
16.
Pointer analysis is one of the core foundational techniques for compiler optimization and error detection in software. Classical pointer analysis frameworks such as Doop translate the program under analysis, together with the analysis algorithm, into a Datalog evaluation problem and solve it; when the program is large, a single round of solving is expensive, and when programs are frequently changed and released, the cost of the associated analyses becomes hard to bear. In recent years, incremental analysis, which reuses existing analysis results to raise analysis efficiency under frequent code changes, has attracted increasing attention. However, current incremental pointer analysis techniques are usually designed for specific algorithms and support only a limited set of pointer analysis options, which greatly restricts their usability. To address these problems, this paper designs and implements DDoop (Differential Doop), an incremental pointer analysis framework based on differential Datalog evaluation. DDoop implements incremental input-fact generation and automated incremental rewriting of analysis rules, expressing the incremental analysis of multiple program versions as a differential Datalog evaluation problem; mature differential Datalog engines such as DDlog can then be exploited to obtain end-to-end incremental pointer analysis, while the existing pointer analysis implementations in Doop are reused with maximal compatibility and incrementalization remains transparent. We evaluated DDoop on widely used real-world programs; the results show that DDoop has a significant performance advantage over the non-incremental Doop framework while remaining highly compatible with the various pointer analysis rules already available in Doop.
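The "differential" flavour can be seen in miniature in a semi-naive fixpoint for one Andersen-style Datalog rule, VarPointsTo(v, o) :- Assign(v, u), VarPointsTo(u, o): each round joins only the newly derived delta against the rule. Real differential engines such as DDlog also retract derivations when input facts are deleted, which this toy loop does not; the relation names follow Doop's style but the code is illustrative.

```python
def var_points_to(assign, base):
    """assign: set of (v, u) copy edges; base: (v, o) facts from allocations."""
    vpt, delta = set(base), set(base)
    while delta:                                   # semi-naive: join delta only
        new = {(v, o) for (v, u) in assign
                      for (u2, o) in delta if u2 == u}
        delta = new - vpt                          # keep only genuinely new facts
        vpt |= delta
    return vpt

facts = var_points_to(assign={("q", "p"), ("r", "q")},
                      base={("p", "o1")})
assert ("r", "o1") in facts                        # o1 flows p -> q -> r
```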

17.
Model checking is a well-known technique for the verification of finite state models using temporal logic specification. While model checking is suitable for transformational systems (also called closed systems), it is unsuitable for open systems (also known as reactive systems), where the nondeterminism in the environment must be considered during verification. Module checking is an approach for the verification of open systems which have both closed (internal) and open (environment or external) states. It has been demonstrated in [Orna Kupferman, Moshe Y. Vardi, and Pierre Wolper. Module checking. Information and Computation, 164:322–344, 2001] that the complexity of module checking the branching-time logic CTL is EXPTIME-complete. The approach to module checking is global, and the method tries to establish that the property in question holds over all possible environments. This paper develops a local approach to CTL module checking using tableau rules. The proposed approach tries to determine a single environment under which the negation of the property is satisfied over the given module. Such a strategy thus leads to a local approach to module checking where we only explore states that are relevant to proving that the negation of the property can be satisfied over the given module, using an appropriate witness (environment) that the algorithm also generates. While the worst-case complexity of our algorithm is identical to the earlier complexity, we demonstrate that a practical implementation of the proposed approach is feasible and yields much better results than the global approach.

18.
Building on a brief analysis of the strengths and weaknesses of existing tag anti-collision algorithms, and starting from the idea of the Jumping Dynamic Search (JDS) algorithm, a novel JDS tag anti-collision algorithm is proposed. The algorithm introduces a stack into the forward and backward searches of the new jumping strategy, avoiding idle slots. During reader interrogation, tag feedback is used to memorize the partially known information, and a variable-length dynamic transmission scheme with an adjustment strategy identifies the tags' unknown data bits, reducing the number of reader queries and the system's transmission volume. Simulation results show that the transmission volume is greatly reduced and throughput is noticeably improved.
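For background only (this is the standard stack-driven query-tree splitting that jumping/dynamic-search variants refine, not the paper's JDS algorithm): the reader pops a prefix, a collision splits it by one bit, a unique answer identifies a tag, and an empty answer is exactly the idle slot JDS tries to avoid.

```python
def query_tree(tags, nbits):
    """tags: list of nbits-long binary-string IDs."""
    stack, found, queries = [""], [], 0
    while stack:
        prefix = stack.pop()
        queries += 1
        matching = [t for t in tags if t.startswith(prefix)]
        if len(matching) == 1:
            found.append(matching[0])               # identified
        elif len(matching) > 1 and len(prefix) < nbits:
            stack += [prefix + "0", prefix + "1"]   # collision: split by one bit
        # len(matching) == 0 is the idle slot
    return found, queries

ids = ["0011", "0101", "1100"]
found, q = query_tree(ids, 4)
assert sorted(found) == sorted(ids)
```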

19.
A pointer analysis algorithm with stronger security guarantees is proposed. After a comparative analysis of four commonly used pointer analysis algorithms, the Steensgaard algorithm was selected for security hardening: semantics for forced (explicit) type casts are added, and the inference rules of the algorithm's type system are refined, so that attacks are no longer missed because of precision loss, while the complexity of the original algorithm remains unchanged.
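A minimal Steensgaard-style unification sketch using union-find shows the base algorithm being hardened: each copy assignment unifies the pointed-to equivalence classes, which is what gives the analysis its near-linear cost. The cast semantics the paper adds are not modelled here; all names are illustrative.

```python
class Node:
    def __init__(self, name):
        self.name, self.parent, self.pointee = name, self, None

def find(n):                                  # union-find with path halving
    while n.parent is not n:
        n.parent = n.parent.parent
        n = n.parent
    return n

def union(a, b):
    a, b = find(a), find(b)
    if a is b:
        return
    b.parent = a
    if a.pointee and b.pointee:
        union(a.pointee, b.pointee)           # unification cascades to targets
    else:
        a.pointee = a.pointee or b.pointee

def pointee(v):
    r = find(v)
    if r.pointee is None:
        r.pointee = Node("*" + r.name)        # fresh placeholder target
    return r.pointee

def assign_addr(p, x): union(pointee(p), x)           # p = &x
def assign_copy(p, q): union(pointee(p), pointee(q))  # p = q
```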

20.
KMPP: an improved algorithm based on the KMP algorithm    (Cited by 1; self-citations: 0, other: 1)
KMP and BM are classical single-pattern matching algorithms, but in KMP the text pointer i advances by only one character at a time, so overall matching efficiency is low. Combining the strengths of the KMP and BM algorithms, an improved algorithm (KMPP) is proposed. The idea is as follows: when the pattern mismatches the text at position j, the position in the text where the pattern's last character would land after a shift of next[j] is computed in advance; if the text character at that position does not match the pattern's last character, that character is used for a bad-character match. The combined jump of these two steps is the distance the text pointer i moves, so that i advances the maximum possible distance on each shift. Experimental results show that the algorithm performs far fewer comparisons than KMP, improving the efficiency of pattern matching.
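A hedged sketch of the idea as the abstract describes it: after a KMP-style shift of next[j], peek at the text character that would align with the pattern's last character; if it cannot match, add a Boyer-Moore bad-character jump from that position. The failure function and index handling below are standard; the exact KMPP bookkeeping may differ.

```python
def build_next(pat):
    nxt, k = [0] * len(pat), 0
    for i in range(1, len(pat)):
        while k and pat[i] != pat[k]:
            k = nxt[k - 1]
        k += pat[i] == pat[k]
        nxt[i] = k
    return nxt

def kmpp_search(text, pat):
    nxt = build_next(pat)
    last = {c: i for i, c in enumerate(pat)}     # rightmost position of each char
    m, i, j = len(pat), 0, 0
    while i < len(text):
        if text[i] == pat[j]:
            i, j = i + 1, j + 1
            if j == m:
                return i - m                     # match found
        elif j:
            j = nxt[j - 1]                       # KMP shift
            end = i - j + m - 1                  # where the last char would land
            if end < len(text) and text[end] != pat[-1]:
                # bad-character jump: realign on the peeked text character
                i, j = end - last.get(text[end], -1), 0
        else:
            i += 1
    return -1

assert kmpp_search("abdabc", "abc") == 3
```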
