Similar Documents
 20 similar documents found (search time: 78 ms)
1.
Simulation-based assertional techniques and process algebraic techniques are two of the major methods that have been proposed for the verification of concurrent and distributed systems. It is shown how each of these techniques can be applied to the task of verifying systems described as input/output automata; both safety and liveness properties are considered. A small but typical circuit is verified in both of these ways, first using forward simulations, an execution correspondence lemma, and a simple fairness argument, and second using deductions within the process algebra DIOA for I/O automata. An extended evaluation and comparison of the two methods is given. Supported by NSF grant CCR-89-15206, by DARPA contracts N00014-89-J-1988 and N00014-92-J-4033, and by ONR contract N00014-91-J-1046.

2.
Summary. A new technique for proving timing properties for timing-based algorithms is described; it is an extension of the mapping techniques previously used in proofs of safety properties for asynchronous concurrent systems. The key to the method is a way of representing a system with timing constraints as an automaton whose state includes predictive timing information. Timing assumptions and timing requirements for the system are both represented in this way. A multi-valued mapping from the assumptions automaton to the requirements automaton is then used to show that the given system satisfies the requirements. One type of mapping is based on a collection of progress functions providing measures of progress toward timing goals. The technique is illustrated with two examples, a simple resource manager and a two-process race system. Nancy A. Lynch received the B.S. degree in mathematics from Brooklyn College, Brooklyn, NY, in 1968, and the Ph.D. degree in mathematics from the Massachusetts Institute of Technology, Cambridge, MA, in 1972. She is presently a professor of computer science and electrical engineering at the Massachusetts Institute of Technology. She has also been on the computer science faculty at Georgia Institute of Technology and on the mathematics faculty at Tufts University and the University of Southern California. Her research interests are in distributed and real-time computing and theoretical computer science. In particular, she has worked on formal models and verification methods, on algorithm design and analysis, and on impossibility results. She also likes to hike and ski. Hagit Attiya received the B.Sc. degree in Mathematics and Computer Science from the Hebrew University of Jerusalem in 1981, and the M.Sc. and Ph.D. degrees in Computer Science from the Hebrew University of Jerusalem in 1983 and 1987, respectively. She is presently a senior lecturer at the Department of Computer Science at the Technion, Israel Institute of Technology. Prior to this, she was a post-doctoral research associate at the Laboratory for Computer Science at M.I.T. Her general research interests are distributed computation and theoretical computer science. More specific interests include fault-tolerance, timing-based and asynchronous algorithms. This work was supported by ONR contracts N00014-85-K-0168 and N00014-91-J-1046, by NSF grants CCR-8611442 and CCR-8915206, and by DARPA contracts N00014-87-K-0825 and N00014-89-J-1988.

3.
Ordinal optimization of DEDS
In this paper we argue that ordinal rather than cardinal optimization, i.e., concentrating on finding good, better, or best designs rather than on estimating accurately the performance value of these designs, offers a new, efficient, and complementary approach to the performance optimization of systems. Some experimental and analytical evidence is offered to substantiate this claim. The main purpose of the paper is to call attention to a novel and promising approach to system optimization. This work is supported by NSF grants CDR-88-03012 and DDM-89-14277, ONR contracts N00014-90-J-1093 and N00014-89-J-1023, and Army contracts DAAL-03-83-K-0171 and DAAL-91-G-0194.

4.
Summary. A variant of the drinking philosophers algorithm of Chandy and Misra is described and proved correct in a modular way. The algorithm of Chandy and Misra is based on a particular dining philosophers algorithm and relies on certain properties of its implementation. The drinking philosophers algorithm presented in this paper is able to use an arbitrary dining philosophers algorithm as a subroutine; nothing about the implementation needs to be known, only that it solves the dining philosophers problem. An important advantage of this modularity is that by substituting a more time-efficient dining philosophers algorithm than the one used by Chandy and Misra, a drinking philosophers algorithm with O(1) worst-case waiting time is obtained, whereas the drinking philosophers algorithm of Chandy and Misra has O(n) worst-case waiting time (for n philosophers). Careful definitions are given to distinguish the drinking and dining philosophers problems and to specify varying degrees of concurrency. Jennifer L. Welch received her B.A. in 1979 from the University of Texas at Austin, and her S.M. and Ph.D. from the Massachusetts Institute of Technology in 1984 and 1988, respectively. She has been a member of technical staff at GTE Laboratories Incorporated in Waltham, Massachusetts and an assistant professor at the University of North Carolina at Chapel Hill. She is currently an assistant professor at Texas A&M University. Her research interests include algorithms and lower bounds for distributed computing. Much of this work was performed while this author was at the Laboratory for Computer Science, Massachusetts Institute of Technology, supported by the Advanced Research Projects Agency of the Department of Defense under contract N00014-83-K-0125, the National Science Foundation under grants DCR-83-02391 and CCR-86-11442, the Office of Army Research under contract DAAG29-84-K-0058, and the Office of Naval Research under contract N00014-85-K-0168. This author was also supported in part by NSF grant CCR-9010730, an IBM Faculty Development Award, and NSF Presidential Young Investigator Award CCR-9158478. This author was supported by the Office of Naval Research under contract N00014-91-J-1046, the Advanced Research Projects Agency of the Department of Defense under contract N00014-89-J-1988, and the National Science Foundation under grant CCR-89-15206. The photograph and autobiography of Professor N.A. Lynch were published in Volume 6, No. 2, 1992, on page 121.

5.
In this paper we describe a deterministic algorithm for solving any 1–1 packet-routing problem on an n × n mesh in 2n − 2 steps using constant-size queues. The time bound is optimal in the worst case. The best previous deterministic algorithm for this problem required time 2n + Θ(n/q) using queues of size Θ(q) for any 1 ≤ q ≤ n, and the best previous randomized algorithm required time 2n + O(log n) using constant-size queues. This research was supported by the Clear Center at UTD, DARPA Contracts N00014-91-J-1698 and N00014-92-J-1799, Air Force Contract F49620-92-J-0125, Army Contract DAAL-03-86-K-0171, an NSF Presidential Young Investigator Award with matching funds from AT&T and IBM, and by the Texas Advanced Research Program under Grant No. 3972. A preliminary version of this paper appeared in [5].
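Why 2n − 2 is the best possible worst-case bound (a one-line check, not taken from the abstract itself): a packet starting at one corner of the mesh and destined for the opposite corner must traverse (n − 1) + (n − 1) = 2n − 2 edges, and it can cross at most one edge per step, so every routing algorithm needs at least 2n − 2 steps on such an instance; the algorithm above matches this distance lower bound.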

6.
The Border Gateway Protocol (BGP) for interdomain routing is designed to allow autonomous systems (ASes) to express policy preferences over alternative routes. We model these preferences as arising from an AS's underlying utility for each route and study the problem of finding a set of routes that maximizes the overall welfare (i.e., the sum of all ASes' utilities for their selected routes). We show that, if the utility functions are unrestricted, this problem is NP-hard even to approximate closely. We then study a natural class of restricted utilities that we call next-hop preferences. We present a strategyproof, polynomial-time computable mechanism for welfare-maximizing routing over this restricted domain. However, we show that, in contrast to earlier work on lowest-cost routing mechanism design, this mechanism appears to be incompatible with BGP and hence difficult to implement in the context of the current Internet. Our contributions include a new complexity measure for Internet algorithms, dynamic stability, which may be useful in other problem domains. Supported in part by ONR grant N00014-01-1-0795 and NSF grant ITR-0219018. Supported by ONR grant N00014-01-1-0795 and NSF grant ITR-0219018; most of this work was done while the author was at Yale University. Supported in part by NSF grants ITR-0121555 and ANI-0207399. This work was supported by the DoD University Research Initiative (URI) program administered by the Office of Naval Research under Grant N00014-01-1-0795. It was presented in preliminary form at the 2004 ACM Symposium on Principles of Distributed Computing [7]. Portions of this work appeared in preliminary form in the second author's PhD Thesis [16].
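To make the welfare-maximization objective concrete, here is a small, hedged sketch (not the paper's strategyproof mechanism): under a simplified reading of next-hop preferences, each AS's utility depends only on which neighbor it forwards through, and an outcome is feasible when every AS's chain of next hops reaches the destination. The toy instance, the utility numbers, and the helper names (reaches_dest, best_routing) are assumptions made for illustration, and the brute-force search is only viable for tiny networks.

```python
from itertools import product

def reaches_dest(choice, dest):
    """Check that following next hops from every AS leads to dest (no cycles)."""
    for start in choice:
        seen, node = set(), start
        while node != dest:
            if node in seen or node not in choice:
                return False
            seen.add(node)
            node = choice[node]
    return True

def best_routing(neighbors, utility, dest):
    """Brute-force welfare-maximizing next-hop assignment (tiny instances only)."""
    ases = [a for a in neighbors if a != dest]
    best, best_welfare = None, float("-inf")
    for hops in product(*(neighbors[a] for a in ases)):
        choice = dict(zip(ases, hops))
        if not reaches_dest(choice, dest):
            continue
        welfare = sum(utility[a][choice[a]] for a in ases)
        if welfare > best_welfare:
            best, best_welfare = choice, welfare
    return best, best_welfare

# Hypothetical 4-AS instance: 'd' is the destination.
neighbors = {"a": ["b", "d"], "b": ["a", "c", "d"], "c": ["b"], "d": []}
utility = {"a": {"b": 5, "d": 1}, "b": {"a": 4, "c": 0, "d": 2}, "c": {"b": 3}}
print(best_routing(neighbors, utility, "d"))   # ({'a': 'b', 'b': 'd', 'c': 'b'}, 10)
```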

7.
Automatic data allocation to minimize communication on SIMD machines
Straightforward compilation of array operations onto massively parallel SIMD machines results in a significant amount of interprocessor data motion. Careful allocation of data across the processors eliminates much of this interprocessor data motion. Researchers are working on extending programming languages to include user directives for specifying good data allocation. Our focus is to automate the data allocation through compiler techniques to achieve portability without sacrificing efficiency. These techniques can be used to fully automate the data allocation process or can be integrated with alignment directives. We present here a complete compiler algorithm for the automatic layout of data to minimize interprocessor data motion. Arrays are aligned by mapping them onto the processors based on their usage. Arrays may be mapped differently in different sections of the program, eliminating much of the interprocessor data motion resulting from a static mapping of arrays. We describe an integrated technique for determining the alignment of arrays locally within regions of the program and minimizing communication globally among these regions. This technique starts with the alignments specified by the directives, if any, and determines the alignment for the remaining arrays. The algorithms proposed in this paper were used in the SIMD compilers at Compass, Inc. Preliminary results from the initial implementation of the data optimization techniques described here suggest a significant decrease in the interprocessor data motion. More analysis is required to better understand the range of expected gains and the conditions under which those gains are achieved. The research described in this paper was supported in part by the Defense Advanced Research Projects Agency under contracts N00014-87-K-0825, F19628-92-C-0045, and N00014-91-J-1698 and in part by a National Science Foundation Presidential Young Investigator Award, grant MIP-8657531, with matching funds from General Electric Corporation, IBM Corporation, and AT&T.

8.
We present an O(n^2 log^3 n) algorithm for the two-center problem, in which we are given a set S of n points in the plane and wish to find two closed disks whose union contains S so that the larger of the two radii is as small as possible. We also give an O(n^2 log^5 n) algorithm for solving the two-line-center problem, where we want to find two strips that cover S whose maximum width is as small as possible. The best previous solutions of both problems require O(n^3) time. Pankaj Agarwal has been supported by DIMACS (Center for Discrete Mathematics and Theoretical Computer Science), an NSF Science and Technology Center, under Grant STC-88-09648. Micha Sharir has been supported by the Office of Naval Research under Grants N00014-89-J-3042 and N00014-90-J-1284, by the National Science Foundation under Grant CCR-89-01484, by DIMACS, and by grants from the US-Israeli Binational Science Foundation, the Fund for Basic Research administered by the Israeli Academy of Sciences, and the G.I.F., the German-Israeli Foundation for Scientific Research and Development. A preliminary version of this paper has appeared in Proceedings of the Second Annual ACM-SIAM Symposium on Discrete Algorithms, 1991, pp. 449–458.

9.
Constraint query algebras
Constraint query languages are natural extensions of relational database query languages. A framework for their declarative specification (constraint calculi) and efficient implementation (low data complexity and secondary storage indexing) was presented in Kanellakis et al., 1995. Constraint query algebras form a procedural language layer between high-level declarative calculi and low-level indexing methods. Just like the relational algebra, this intermediate layer can be very useful for program optimization. In this paper, we study properties of constraint query algebras, which we present through three concrete examples. The dense order constraint algebra illustrates how the appropriate canonical form can simplify expensive operations, such as projection, and facilitate interaction with updates. The monotone two-variable linear constraint algebra illustrates the concept of strongly polynomial operations. Finally, the lazy evaluation of (non)linear constraint algebras illustrates how large numbers of (non)linear constraints could be implemented with only a small amount of costly symbolic processing. Paris C. Kanellakis, one of the authors of this paper, died in a tragic accident shortly after the completion of the first draft. We thank all the reviewers, whose comments were invaluable in helping us complete the work. We also thank Raghu Ramakrishnan for making a last-minute review of the final draft. Research supported by ONR Contracts N00014-94-1-1153 and N00014-91-J-4052, ARPA Order 8225, and by NSF Grant IRI-9509933.
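As a rough illustration of why projection is the expensive operation that a good canonical form must tame, the sketch below eliminates a variable from a conjunction of dense-order constraints (only < and <= over the rationals) by combining every lower bound on the variable with every upper bound, which is quadratic in the number of constraints touching it. This is a toy, assumption-laden example, not the canonical-form algebra developed in the paper; the constraint encoding and the eliminate function are invented for the demonstration.

```python
from itertools import product

# A constraint is a triple (lhs, op, rhs): op is "<" or "<=",
# and lhs/rhs are variable names (str) or rational constants (int/float).

def eliminate(constraints, var):
    """Project out `var` from a conjunction of dense-order constraints."""
    lowers = [(l, op) for (l, op, r) in constraints if r == var]   # l  op  var
    uppers = [(r, op) for (l, op, r) in constraints if l == var]   # var op  r
    rest = [c for c in constraints if var not in (c[0], c[2])]
    derived = []
    for (l, op1), (r, op2) in product(lowers, uppers):
        if l == r:
            continue  # trivially redundant (inconsistency handling omitted)
        op = "<=" if op1 == "<=" and op2 == "<=" else "<"
        derived.append((l, op, r))
    return rest + derived

# Example: x < y, y < z, z <= 5; eliminating y and then z leaves x < 5.
phi = [("x", "<", "y"), ("y", "<", "z"), ("z", "<=", 5)]
print(eliminate(eliminate(phi, "y"), "z"))   # [('x', '<', 5)]
```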

10.
In this paper we describe an O(log N)-bit-step randomized algorithm for bit-serial message routing on a hypercube. The result is asymptotically optimal, and improves upon the best previously known algorithms by a logarithmic factor. The result also solves the problem of on-line circuit switching in an O(1)-dilated hypercube (i.e., the problem of establishing edge-disjoint paths between the nodes of the dilated hypercube for any one-to-one mapping). Our algorithm is adaptive and we show that this is necessary to achieve the logarithmic speedup. We generalize the Borodin-Hopcroft lower bound on oblivious routing by proving that any randomized oblivious algorithm on a polylogarithmic degree network requires at least Ω(log^2 N / log log N) bit steps with high probability for almost all permutations. This research was supported by the Defense Advanced Research Projects Agency under Contracts N00014-87-K-825 and N00014-89-J-1988, the Air Force under Contract AFOSR-89-0271, and the Army under Contract DAAL-03-86-K-0171. This work was completed while the third and fourth authors were at the Laboratory for Computer Science, Massachusetts Institute of Technology.

11.
A study of affine matching with bounded sensor error
Affine transformations of the plane have been used in a number of model-based recognition systems. Because the underlying mathematics are based on exact data, in practice various heuristics are used to adapt the methods to real data where there is positional uncertainty. This paper provides a precise analysis of affine point matching under uncertainty. We obtain an expression for the range of affine-invariant values that are consistent with a given set of four points, where each image point lies in an ε-disc of uncertainty. This range is shown to depend on the actual x-y positions of the data points. In other words, given uncertainty in the data there are no representations that are invariant with respect to the Cartesian coordinate system of the data. This is problematic for methods, such as geometric hashing, that are based on affine-invariant representations. We also analyze the effect that uncertainty has on the probability that recognition methods using affine transformations will find false positive matches. We find that there is a significant probability of false positives with even moderate levels of sensor error, suggesting the importance of good verification techniques and good grouping techniques. This report describes research done in part at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's research is provided in part by an ONR URI grant under contract N00014-86-K-0685, and in part by DARPA under Army contract number DACA76-85-C-0010 and under ONR contracts N00014-85-K-0124 and N00014-91-J-4038. WELG is supported in part by NSF contract number IRI-8900267. DPH is supported at Cornell University in part by NSF grant IRI-9057928 and matching funds from General Electric and Xerox, and in part by AFOSR under contract AFOSR-91-0328.
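A small numerical sketch of the phenomenon described above: the affine coordinates (alpha, beta) of a fourth point in the basis of three others are exact invariants for exact data, but once each point may move inside an ε-disc, the set of consistent (alpha, beta) values becomes a region whose size depends on the absolute positions (here, the scale) of the points. The sampling approach and the two example configurations are assumptions made for illustration, not the closed-form analysis of the paper.

```python
import math, random

def affine_coords(p0, p1, p2, p3):
    """Solve p3 - p0 = alpha*(p1 - p0) + beta*(p2 - p0) by Cramer's rule."""
    (ax, ay), (bx, by) = (p1[0] - p0[0], p1[1] - p0[1]), (p2[0] - p0[0], p2[1] - p0[1])
    cx, cy = p3[0] - p0[0], p3[1] - p0[1]
    det = ax * by - ay * bx
    return (cx * by - cy * bx) / det, (ax * cy - ay * cx) / det

def perturbed_range(points, eps, trials=20000):
    """Sample perturbations of each point within an eps-disc; return spread of alpha."""
    alphas = []
    for _ in range(trials):
        q = []
        for (x, y) in points:
            r, t = eps * math.sqrt(random.random()), random.uniform(0, 2 * math.pi)
            q.append((x + r * math.cos(t), y + r * math.sin(t)))
        alphas.append(affine_coords(*q)[0])
    return max(alphas) - min(alphas)

random.seed(0)
small = [(0, 0), (1, 0), (0, 1), (0.5, 0.5)]       # unit-scale configuration
large = [(0, 0), (10, 0), (0, 10), (5.0, 5.0)]     # same shape, 10x larger
print(perturbed_range(small, eps=0.05))   # wide spread of alpha
print(perturbed_range(large, eps=0.05))   # roughly 10x narrower spread
```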

12.
Extending a blackboard architecture for approximate processing
Approximate processing is an approach to real-time AI problem-solving systems in domains where there is a range of acceptable answers in terms of certainty, accuracy, and completeness. Such a system needs to evaluate the current situation, make time predictions about the likelihood of achieving current objectives, monitor the processing and pursuit of those objectives, and if necessary, choose new objectives and associated processing strategies that are achievable in the available time. In this approach, the system is performing satisficing problem-solving, in that it is attempting to generate the best possible solutions within available time and computational resource constraints. Previously published work (Lesser, Pavlin and Durfee 1988) has dealt with this approach to real-time; however, an important aspect was not fully developed: the problem solver must be very flexible in its ability to represent and efficiently implement a variety of processing strategies. Extensions to the blackboard model of problem solving that facilitate approximate processing are demonstrated for the task of knowledge-based signal interpretation. This is accomplished by extending the blackboard model of problem solving to include data, knowledge, and control approximations. With minimal overhead, the problem solver dynamically responds to the current situation by altering its operators and state space abstraction to produce a range of acceptable answers. Initial experiments with this approach show promising results in both providing a range of processing algorithms and in controlling this dynamic system with low overhead. This work was partly supported by the Office of Naval Research under a University Research Initiative grant, number N00014-86-K-0764, NSF-CER contract DCR-8500332, and ONR contract N00014-89-J-1877.

13.
In many applications of knowledge-based systems, the initial data are insufficient to complete inference. In that case, knowledge-based systems ask users questions in order to acquire more information. When and what to ask is determined by a question-asking strategy. This paper deals with question-asking strategies for a Horn system in which the response costs of the questions and probabilistic estimates of the answers are given. We introduce a question sequencing rule and enhance an efficient question-asking strategy. Our computational experiments show that the proposed question-asking strategy is very effective. Supported in part by the Office of Naval Research grant N00014-94-1-0355. Supported in part by the Office of Naval Research grant N00014-95-0639.
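The paper's specific sequencing rule is not reproduced in this listing, so the sketch below shows a classical heuristic of the same flavor, stated as an assumption rather than as the paper's method: when any single 'yes' answer suffices to establish the goal and questions are independent, asking in nondecreasing order of cost divided by probability of a 'yes' minimizes the expected total response cost. The helper names and the example numbers are invented for the demonstration.

```python
from itertools import permutations

def expected_cost(order, cost, prob):
    """Expected cost of asking in this order, stopping at the first 'yes'."""
    total, p_no_so_far = 0.0, 1.0
    for q in order:
        total += p_no_so_far * cost[q]
        p_no_so_far *= 1.0 - prob[q]
    return total

# Hypothetical questions: response cost and probability of a 'yes' answer.
cost = {"q1": 5.0, "q2": 1.0, "q3": 3.0}
prob = {"q1": 0.9, "q2": 0.2, "q3": 0.5}

ratio_order = sorted(cost, key=lambda q: cost[q] / prob[q])
best_order = min(permutations(cost), key=lambda o: expected_cost(o, cost, prob))
print(ratio_order, expected_cost(ratio_order, cost, prob))
print(list(best_order), expected_cost(best_order, cost, prob))  # same expected cost
```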

14.
Most proof methods for reasoning about concurrent programs are based upon the interleaving semantics of concurrent computation: a concurrent program is executed in a stepwise fashion, with only one enabled action being executed at each step. Interleaving semantics, in effect, requires that a concurrent program be executed as a nondeterministic sequential program. This is clearly an abstraction of the way in which concurrent programs are actually executed. To ensure that this is a reasonable abstraction, interleaving semantics should only be used to reason about programs with simple actions; we call such programs atomic. In this paper, we formally characterise the class of atomic programs. We adopt the criterion that a program is atomic if it can be implemented in a wait-free, serialisable manner by a primitive program. A program is primitive if each of its actions has at most one occurrence of a shared bit, and each shared bit is read by at most one process and written by at most one process. It follows from our results that the traditionally accepted atomicity criterion, which allows each action to have at most one occurrence of a shared variable, can be relaxed, allowing programs to have more powerful actions. For example, according to our criterion, an action can read any finite number of shared variables, provided it writes no shared variable. Work supported, in part, at the University of Texas at Austin by Office of Naval Research Contract N00014-89-J-1913, and at the University of Maryland by an award from the University of Maryland General Research Board. Work supported, in part, by Office of Naval Research Contract N00014-89-J-1913.

15.
Augmented infinitesimal perturbation analysis (APA) was introduced by Gaivoronski [1991] to increase the purview of the theory of Infinitesimal Perturbation Analysis (IPA). In reference [Gaivoronski 1991] it is shown that an unbiased estimate for the gradient of a class of performance measures of DEDS represented by generalized semi-Markov processes (GSMPs) (cf. [Glynn 1989]) can be expressed as a sum of an IPA estimate and a term that takes into account the event order changes. In this paper we present an alternate approach to establishing the result of Gaivoronski, and from this we derive a necessary and sufficient condition for the validity of the IPA algorithm for this class of performance measures. Finally we validate our results by simulation examples. This research was supported by the National Science Foundation under grant number ECS-85-15449, Office of Naval Research Grants Nos. N00014-90-K-1093 and N00014-89-J-1023, and by Army Grant No. DAAL-03-86-K-0171.
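For readers unfamiliar with what an IPA estimate is, here is a standard textbook-style sketch (an assumption of this listing, not Gaivoronski's augmented estimator): in a single-server FIFO queue with service times S = theta * X, the Lindley recursion for waiting times can be differentiated along the sample path, yielding an IPA estimate of d(mean wait)/d(theta); it is compared against a finite-difference estimate using common random numbers.

```python
import random

def simulate(theta, n=200000, seed=1):
    """Single-server FIFO queue via the Lindley recursion W' = max(0, W + S - A),
    with S = theta * X. Returns (mean wait, IPA estimate of d(mean wait)/d theta)."""
    rng = random.Random(seed)
    w, dw, sum_w, sum_dw = 0.0, 0.0, 0.0, 0.0
    for _ in range(n):
        x = rng.expovariate(1.0)      # service sample: S = theta * x, so dS/dtheta = x
        a = rng.expovariate(0.5)      # interarrival time (kept fixed as theta varies)
        v = w + theta * x - a
        if v > 0.0:                   # waiting persists: the perturbation propagates
            w, dw = v, dw + x
        else:                         # queue empties: the perturbation is absorbed
            w, dw = 0.0, 0.0
        sum_w += w
        sum_dw += dw
    return sum_w / n, sum_dw / n

theta, h = 1.0, 0.01
_, ipa = simulate(theta)
lo_mean, _ = simulate(theta - h)
hi_mean, _ = simulate(theta + h)
print("IPA estimate:", ipa)
print("Finite difference (common random numbers):", (hi_mean - lo_mean) / (2 * h))
```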

16.
Modern computer-based systems have many required characteristics, including performance, concurrency, timeliness, availability, dependability, safety and security. Aging computer-based systems are being reengineered to exploit distributed and highly parallel hardware platforms, in order to accommodate increased functional requirements and to achieve dependability. Simultaneously, reengineering is introducing modern software engineering principles such as component layering, encapsulation and information hiding, in order to reduce maintenance burdens and to simplify future enhancement. This paper summarizes a process for reengineering of computer-based systems to achieve these goals. The process incorporates experience gained through the reengineering of the Weapons Selection module and the Surface Operations module of the AEGIS Weapon System. A key component of the reengineering process is the mission critical software architecture (MCSA), a hierarchical view of systems that includes the abstraction levels of programs, tasks, packages, procedures, and statements. To guide the reengineering processes of software transformation, concurrency metrics are defined at the procedure, package and task levels of the MCSA. The paper defines the intermediate representation (IR) needed to compute the metrics, and shows how to use the IR to compute several metrics: the percentage of concurrency within procedures and packages, and the amount of potential concurrency among tasks. The metrics are important in any reengineering approach that iteratively transforms design and code, or that deals with system configuration (the integration of software, hardware and humanware). This work is supported in part by the US NSWC (N00178-95-R-2007, N60921-94-M-6096, N60921-94-M-1960, N60921-94-M-2555, N60921-95-M-0311), by the State of New Jersey (SBR-421290), and by the US ONR (N00014-92-J-1367).

17.
In this paper we apply techniques from computational geometry to solve several problems in grasp planning and control in robotics. We consider the problem of calculating force targets for a collection of n fingers which grasp a two-dimensional object at known positions, at which the normals to the surface are also assumed to be known at least approximately. If the points at which the fingers touch the body do not allow a positive grip to be exerted (i.e., a grip in which the fingers hold the body in equilibrium by exerting friction-free forces in the directions of the corresponding inward-directed normals), it is appropriate to find the smallest coefficient of friction for which it is possible to assign a set of forces to be exerted by the fingers (so-called finger-force targets) which hold the object at equilibrium and such that each individual force lies within the corresponding cone of friction. We present an algorithm for this problem which runs in time O(n log^2 n log log n). We also present another algorithm for preprocessing the given data so as to allow fast computation of the desired coefficient of friction for the case in which one needs to balance any given query external force and torque. Finally, we discuss simpler variants of our techniques which are likely to be more efficient when the problem is solved for a small number of fingers. Work on this paper has been supported by Office of Naval Research Grants N00014-87-K-0129, N00014-89-J-3042, and N00014-90-J-1284, and by National Science Foundation Grants DCR-83-20085 and CCR-89-01484. Work by the second author has also been supported by research grants from the NCRD, the Israeli National Council for Research and Development, the U.S.-Israeli Binational Science Foundation, and the Fund for Basic Research administered by the Israeli Academy of Sciences. A preliminary version of this paper has appeared in the Proceedings of the 25th Annual Allerton Conference on Communication, Control and Computing, September 1987, pp. 843–848.
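A direct numerical way to pose the quantity described above (a sketch under assumptions, not the paper's computational-geometry algorithm): for a fixed coefficient of friction mu, writing each finger force as a normal component a_i >= 0 plus a tangential component b_i with |b_i| <= mu * a_i turns the equilibrium conditions into a linear feasibility problem, and the smallest feasible mu can then be bracketed by bisection. The use of scipy.optimize.linprog, the normalization sum(a_i) = 1 to rule out the all-zero solution, and the example contact data are all assumptions of this sketch.

```python
from scipy.optimize import linprog

def grip_feasible(points, normals, mu):
    """Can fingers at `points` with inward unit `normals` hold the object in
    equilibrium using forces inside friction cones with coefficient `mu`?"""
    n = len(points)
    tangents = [(-ny, nx) for (nx, ny) in normals]
    # Variables: a_0..a_{n-1} (normal magnitudes, >= 0), then b_0..b_{n-1} (tangential, free).
    A_eq, b_eq = [], []
    for comp in range(2):                       # net force = 0 (x and y components)
        A_eq.append([normals[i][comp] for i in range(n)] +
                    [tangents[i][comp] for i in range(n)])
        b_eq.append(0.0)
    A_eq.append([points[i][0] * normals[i][1] - points[i][1] * normals[i][0] for i in range(n)] +
                [points[i][0] * tangents[i][1] - points[i][1] * tangents[i][0] for i in range(n)])
    b_eq.append(0.0)                            # net torque = 0
    A_eq.append([1.0] * n + [0.0] * n)          # normalization: sum of a_i = 1
    b_eq.append(1.0)
    A_ub, b_ub = [], []
    for i in range(n):                          # friction cones: |b_i| <= mu * a_i
        for sign in (1.0, -1.0):
            coeffs = [0.0] * (2 * n)
            coeffs[i], coeffs[n + i] = -mu, sign
            A_ub.append(coeffs); b_ub.append(0.0)
    bounds = [(0, None)] * n + [(None, None)] * n
    res = linprog(c=[0.0] * (2 * n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.success

def smallest_mu(points, normals, upper=10.0, iters=40):
    lower = 0.0
    if not grip_feasible(points, normals, upper):
        return None
    for _ in range(iters):                      # bisection on mu
        mid = 0.5 * (lower + upper)
        lower, upper = (lower, mid) if grip_feasible(points, normals, mid) else (mid, upper)
    return upper

# Hypothetical grasp: two fingers on adjacent faces of a unit square (not a positive grip).
pts = [(1.0, 0.0), (0.0, 1.0)]
nrm = [(-1.0, 0.0), (0.0, -1.0)]
print(smallest_mu(pts, nrm))   # converges to 1.0 for this configuration
```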

18.
In this paper we present an O(log n)-time parallel algorithm for computing the convex hull of n points in three-dimensional space. This algorithm uses O(n^(1+α)) processors on a CREW PRAM, for any constant 0 < α < 1. So far, all adequately documented parallel algorithms proposed for this problem use time at least O(log^2 n). In addition, the algorithm presented here is the first parallel algorithm for the three-dimensional convex hull problem that is not based on the serial divide-and-conquer algorithm of Preparata and Hong, whose crucial operation is the merging of the convex hulls of two linearly separated point sets. The contributions of this paper are therefore (i) an O(log n)-time parallel algorithm for the three-dimensional convex hull problem, and (ii) a parallel algorithm for this problem that does not follow the traditional paradigm. This paper was presented in preliminary form at the 9th Annual ACM Symposium on Computational Geometry, San Diego, CA, May 1993 [32]. The work of N. M. Amato was supported in part by an AT&T Bell Laboratories Graduate Fellowship, the Joint Services Electronics Program (U.S. Army, U.S. Navy, U.S. Air Force) under Contract N00014-90-J-1270, and NSF Grant CCR-89-22008. This work was done while N. M. Amato was with the Department of Computer Science at the University of Illinois. The work of F. P. Preparata was supported in part by NSF Grants CCR-91-96152, CCR-91-96176, and ONR Contract N00014-91-J-4052, ARPA Order 8225.

19.
Let P be a simple polygon with n vertices. We present a simple decomposition scheme that partitions the interior of P into O(n) so-called geodesic triangles, so that any line segment interior to P crosses at most 2 log n of these triangles. This decomposition can be used to preprocess P in a very simple manner, so that any ray-shooting query can be answered in time O(log n). The data structure requires O(n) storage and O(n log n) preprocessing time. By using more sophisticated techniques, we can reduce the preprocessing time to O(n). We also extend our general technique to the case of ray shooting amidst k polygonal obstacles with a total of n edges, so that a query can be answered in O(√k log n) time. Work by Bernard Chazelle has been supported by NSF Grant CCR-87-00917. Work by Herbert Edelsbrunner has been supported by NSF Grant CCR-89-21421. Work by Micha Sharir has been supported by ONR Grants N00014-89-J-3042 and N00014-90-J-1284, by NSF Grant CCR-89-01484, and by grants from the U.S.-Israeli Binational Science Foundation, the Fund for Basic Research administered by the Israeli Academy of Sciences, and the G.I.F., the German-Israeli Foundation for Scientific Research and Development.
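For contrast with the O(log n) query time above, the brute-force alternative tests the ray against every polygon edge, which costs O(n) per query. The sketch below is that baseline (an illustration assumed for this listing, not the geodesic-triangulation structure): it returns the parameter of the nearest edge hit by a ray shot from a point inside the polygon.

```python
def cross(ax, ay, bx, by):
    return ax * by - ay * bx

def shoot_ray(polygon, origin, direction):
    """O(n) brute-force ray shooting: nearest intersection of the ray
    origin + t*direction (t > 0) with the polygon's edges."""
    ox, oy = origin
    dx, dy = direction
    best_t, best_edge = None, None
    m = len(polygon)
    for i in range(m):
        (ax, ay), (bx, by) = polygon[i], polygon[(i + 1) % m]
        ex, ey = bx - ax, by - ay
        denom = cross(dx, dy, ex, ey)
        if abs(denom) < 1e-12:
            continue                                      # ray parallel to this edge
        t = cross(ax - ox, ay - oy, ex, ey) / denom       # position along the ray
        s = cross(ax - ox, ay - oy, dx, dy) / denom       # position along the edge
        if t > 1e-12 and 0.0 <= s <= 1.0 and (best_t is None or t < best_t):
            best_t, best_edge = t, i
    return best_t, best_edge

# A small simple polygon (an L-shape) and a query from an interior point.
P = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(shoot_ray(P, origin=(1, 1), direction=(1, 0)))   # hits edge (4,0)-(4,2) at t = 3
```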

20.
A priority inversion occurs when a low-priority task causes the execution of a higher-priority task to be delayed. The possibility of priority inversions complicates the analysis of systems that use priority-based schedulers because priority inversions invalidate the assumption that a task can be delayed only by higher-priority tasks. This paper formalizes priority inversion and gives sufficient conditions, as well as some new protocols, for preventing priority inversions. Supported by the Commission of the European Communities under the ESPRIT Programme Basic Research Action Number 3092 (Predictably Dependable Computing Systems) and the Italian Ministry of Research and University, and in part by the Defense Advanced Research Projects Agency (DoD) under NASA Ames grant number NAG-2-593. Supported in part by the Defense Advanced Research Projects Agency (DoD) under NASA Ames grant number NAG 2-593, and by grants from IBM T.J. Watson Research Laboratory, the IBM Endicott Programming Laboratory, Siemens RTL, and Xerox Webster Research Center. Supported in part by the Office of Naval Research under contract N00014-91-J-1219, the National Science Foundation under Grant No. CCR-8701103, DARPA/NSF Grant No. CCR-9014363, and by the IBM Endicott Programming Laboratory.
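One widely used way to bound the delay described above is the priority inheritance protocol: while a low-priority task holds a lock that a higher-priority task is blocked on, the holder temporarily runs at the blocked task's priority, so medium-priority tasks cannot preempt it indefinitely. The sketch below is a schematic, single-threaded model of that bookkeeping (assumed here for illustration; it is not one of the protocols proposed in the paper and performs no real scheduling).

```python
class Task:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.effective_priority = base_priority

class InheritanceLock:
    """Lock that propagates the priority of blocked waiters to its holder."""
    def __init__(self):
        self.holder = None
        self.waiters = []

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True                      # got the lock
        self.waiters.append(task)
        # Priority inheritance: boost the holder to the highest waiting priority.
        top = max(w.effective_priority for w in self.waiters)
        self.holder.effective_priority = max(self.holder.effective_priority, top)
        return False                         # caller is blocked

    def release(self):
        old = self.holder
        old.effective_priority = old.base_priority      # restore on release
        self.holder = max(self.waiters, key=lambda w: w.effective_priority, default=None)
        if self.holder is not None:
            self.waiters.remove(self.holder)
        return old

low, med, high = Task("low", 1), Task("med", 2), Task("high", 3)
lock = InheritanceLock()
lock.acquire(low)                 # low-priority task enters the critical section
lock.acquire(high)                # high-priority task blocks on the same lock
print(low.effective_priority)     # 3: low now runs at high's priority, so med (2) cannot preempt it
lock.release()
print(low.effective_priority, lock.holder.name)   # 1 high
```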
