Similar documents
20 similar documents found (search time: 48 ms)
1.
We describe a modular programming style that harnesses modern type systems to verify safety conditions in practical systems. This style has three ingredients:
(i) A compact kernel of trust that is specific to the problem domain.
(ii) Unique names (capabilities) that confer rights and certify properties, so as to extend the trust from the kernel to the rest of the application.
(iii) Static (type) proxies for dynamic values.
We illustrate our approach using examples from the dependent-type literature, but our programs are written in Haskell and OCaml today, so our techniques are compatible with imperative code, native mutable arrays, and general recursion. The three ingredients of this programming style call for (1) an expressive core language, (2) higher-rank polymorphism, and (3) phantom types.
Keywords: Modular programming; verification; safety property; static types
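The trusted-kernel idea can be illustrated in Python rather than Haskell or OCaml (a minimal sketch of our own; the names `Sorted` and `binary_search` are hypothetical, not from the paper): only a small kernel constructs the `Sorted` wrapper, so any function demanding a `Sorted` argument inherits the kernel's guarantee.

```python
# Hypothetical sketch of the "compact kernel of trust" idea (names are ours,
# not the paper's): only the kernel constructs Sorted, so holding a Sorted
# value certifies the sortedness property to the rest of the program.
from bisect import bisect_left

class Sorted:
    """A certificate-carrying wrapper; construct only via this kernel."""
    def __init__(self, items):
        # Kernel of trust: the single place the safety condition is checked.
        if any(a > b for a, b in zip(items, items[1:])):
            raise ValueError("input is not sorted")
        self.items = list(items)

def binary_search(xs: Sorted, key) -> bool:
    # Safe to use bisection: the Sorted type certifies the precondition.
    i = bisect_left(xs.items, key)
    return i < len(xs.items) and xs.items[i] == key

certified = Sorted([1, 3, 5, 8])
print(binary_search(certified, 5))
```

In a statically typed language the certificate is checked at compile time; this dynamic sketch only conveys the division of labour between kernel and application.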

2.
Sheinwald, Lempel, and Ziv (1995, Inform. and Comput. 116, 128–133) proved that the power of off-line coding is not useful if we want on-line decodable files, as far as asymptotic results are concerned. In this paper, we are concerned with the finite case and consider the notion of on-line decodable optimal parsing based on the parsing defined by the Ziv–Lempel (LZ2) compression algorithm. De Agostino and Storer (1996, Inform. Process. Lett. 59, 169–174) proved the NP-completeness of computing the optimal parsing and that a sublogarithmic-factor approximation algorithm cannot be realized on-line. We show that the Ziv–Lempel algorithm and two widely used practical implementations produce an O(n^{1/4}) approximation of the optimal parsing, where n is the length of the string. By working with de Bruijn sequences, we also show infinite families of binary strings on which the approximation factor is Θ(n^{1/4}).
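The greedy Ziv–Lempel parsing the abstract analyzes can be sketched as follows (a minimal LZ78-style illustration of our own, not the papers' implementations): each phrase extends the longest previously seen phrase by one symbol, and fewer phrases means a better parsing.

```python
# A minimal sketch (our own, not the papers' code) of greedy Ziv–Lempel
# (LZ78/LZ2-style) parsing: each phrase is the longest phrase already in the
# dictionary, extended by one fresh symbol.
def lz2_parse(s: str) -> list[str]:
    phrases, dictionary, i = [], {""}, 0
    while i < len(s):
        j = i + 1
        # Greedily grow the phrase while it is still in the dictionary.
        while j < len(s) and s[i:j] in dictionary:
            j += 1
        phrase = s[i:j]
        dictionary.add(phrase)
        phrases.append(phrase)
        i = j
    return phrases

print(lz2_parse("aaabbabaabaa"))  # → ['a', 'aa', 'b', 'ba', 'baa', 'baa']
```

The number of phrases produced by this greedy rule is what the O(n^{1/4}) bound compares against the optimal on-line decodable parsing.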

3.
In current networks, packet losses can occur if routers do not provide sufficiently large buffers. This paper studies how large the buffers in a router should be to eliminate packet losses. We assume a network router has m incoming queues, each corresponding to a single traffic stream, and must decide on-line, at each step, from which queue to take the next packet to send out. To exclude packet losses with a small amount of buffer space, the maximum queue length must be kept low over the entire scheduling period. We call this new on-line problem the balanced scheduling problem (BSP). Using competitive analysis, we measure the power of on-line scheduling algorithms to prevent packet losses. We show that a simple greedy algorithm is Θ(log m)-competitive, which is asymptotically optimal, while Round-Robin scheduling is no better than m-competitive, as in fact is any deterministic on-line algorithm for BSP. We also give a polynomial-time algorithm for solving off-line BSP optimally, and we study another on-line balancing problem that tries to balance the delay among the m traffic streams.
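The greedy rule the abstract refers to can be sketched as follows (our own illustration, assuming one packet can be sent per time step): always serve the currently longest queue, and record the largest queue length ever reached, i.e. the buffer size needed to avoid loss.

```python
# A sketch (our construction, assuming one packet is sent per time step) of
# the greedy rule for the balanced scheduling problem: always serve the
# currently longest queue to keep the maximum queue length low.
from typing import Iterable, Sequence

def greedy_bsp(arrivals: Iterable[Sequence[int]], m: int) -> int:
    """arrivals[t][q] = packets arriving at queue q in step t.
    Returns the maximum queue length (buffer size needed) under greedy."""
    queues = [0] * m
    worst = 0
    for step in arrivals:
        for q, pkts in enumerate(step):
            queues[q] += pkts
        worst = max(worst, max(queues))        # buffers must hold this many
        longest = max(range(m), key=lambda q: queues[q])
        if queues[longest] > 0:                # send one packet from the
            queues[longest] -= 1               # longest queue
    return worst

print(greedy_bsp([(1, 0, 1), (0, 1, 0), (2, 0, 0)], m=3))
```

Competitive analysis compares this `worst` value against the maximum queue length achievable by an optimal off-line schedule of the same arrival sequence.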

4.
We study the problem of scheduling a parallel computation so as to minimize the maximum number of data items extant at any point in the execution. Computations are expressed as directed graphs, where nodes represent primitive operations and arcs represent data dependences. The result of an operation is extant after the operation executes and until all immediate successors have begun execution. Our goal is to schedule computations so as to minimize both the maximum space required for extant data and the overall completion time. The classical problem of multiprocessor scheduling with precedence constraints is a special case of our problem, obtained by disregarding the data-space constraint. This special case is NP-complete for general graphs; a time-optimal multiprocessor scheduling algorithm is known only for the class of arbitrary trees. For this same class of arbitrary trees, we present a multiprocessor scheduling algorithm whose completion time is optimal within a constant factor, while the data-space size exceeds the optimal by a factor not greater than the number of processors. For an arbitrary n-node precedence tree T of in-degree Δ, we present:
(1) an algorithm for evaluating the lower bound on the size of the data space required for executing T, regardless of the completion time or number of processors;
(2) a proof that the lower bound of Part 1 may be as large as (Δ−1) lg n but not larger;
(3) a single-processor schedule that executes T in optimal time, while creating a data space of size equal to the lower bound of Part 1;
(4) an ω-processor schedule that executes T in time not exceeding three times the optimal, while creating a data space of size that exceeds the lower bound of Part 1 by a factor not greater than ω;
(5) a proof that for every number of processors ω and every 0 < ε ≤ 1, there exist infinitely many trees such that every ω-processor schedule that executes any of these trees in time not exceeding (2−ε) times the optimal requires a token space as large as that created by the schedule of Part 4, while the schedule of Part 4 executes every such tree in optimal time.
The family of complete binary trees provides an example where our schedule achieves an exponential improvement in the size of the data space, compared to that of the classical time-optimal schedule.
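For the special case of binary trees, the single-processor question is related to the classic Sethi–Ullman-style recursion: evaluate the subtree that needs more space first. The following sketch is our own illustration of that recursion (not the paper's general-Δ algorithm) and computes the minimal number of simultaneously extant results for a binary in-tree.

```python
# An illustrative sketch (not the paper's algorithm): for a *binary* in-tree,
# a Sethi–Ullman-style recursion gives the minimum number of simultaneously
# extant results on one processor. Evaluate the hungrier subtree first; if
# both sides need the same space, one extra slot is unavoidable.
def min_space(node) -> int:
    """node = (left, right) for an internal node, or None for a leaf."""
    if node is None:
        return 1                       # a leaf produces one extant value
    a, b = (min_space(child) for child in node)
    return max(a, b) if a != b else a + 1

leaf = None
balanced = ((leaf, leaf), (leaf, leaf))   # complete binary tree
chain = ((leaf, leaf), leaf)              # left-leaning tree
print(min_space(balanced), min_space(chain))
```

For complete binary trees the value grows logarithmically in n, consistent with the (Δ−1) lg n bound at Δ = 2, while skewed trees need only constant space.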

5.
We show the following:
(i) In existing anonymous credential revocation systems, the revocation authority can link the transactions of any user in a subset T of users using O(log |T|) fake failed sessions.
(ii) A concern about the DLREP-I anonymous credential system described in [Stefan Brands: Rethinking Public Key Infrastructure and Digital Certificates; The MIT Press, Cambridge, Massachusetts / London, England. ISBN 0-262-02491-8] and [Stefan Brands: A Technical Overview of Digital Credentials; February 2002 (formerly a white paper at credentica.com)].
Keywords: Anonymous credential system; trust certification; DLREP-I

6.
This paper deals with vector covering problems in d-dimensional space. The input to a vector covering problem consists of a set X of d-dimensional vectors in [0,1]^d. The goal is to partition X into a maximum number of parts, subject to the constraint that in every part the sum of all vectors is at least one in every coordinate. This problem is known to be NP-complete, and we are mainly interested in its on-line and off-line approximability. For the on-line version, we construct approximation algorithms with worst-case guarantee arbitrarily close to 1/(2d) in d ≥ 2 dimensions. This result contradicts a statement of Csirik and Frenk in [5] where it is claimed that, for d ≥ 2, no on-line algorithm can have a worst-case ratio better than zero. Moreover, we prove that, for d ≥ 2, no on-line algorithm can have a worst-case ratio better than 2/(2d+1). For the off-line version, we derive polynomial-time approximation algorithms with worst-case guarantee Θ(1/log d). For d = 2, we present a very fast and very simple off-line approximation algorithm that has worst-case ratio 1/2. Moreover, we show that a method from the area of compact vector summation can be used to construct off-line approximation algorithms with worst-case ratio 1/d for every d ≥ 2. Received November 1996; revised March 1997.
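A simplest-possible covering heuristic (our own next-fit-style illustration, not one of the algorithms analyzed in the paper) shows the shape of the problem: vectors are greedily accumulated into the current part until every coordinate of the running sum reaches 1.

```python
# A next-fit-style sketch for vector covering (our own illustration, not one
# of the paper's algorithms): add vectors to the current part until every
# coordinate of the running sum reaches 1, then close the part and start a
# new one. Vectors left in an unfinished part are wasted.
def nextfit_cover(vectors: list[tuple[float, ...]]) -> int:
    """Return the number of parts whose coordinate-wise sum is >= 1."""
    parts, total = 0, None
    for v in vectors:
        total = v if total is None else tuple(a + b for a, b in zip(total, v))
        if all(c >= 1 for c in total):     # covered in every coordinate
            parts += 1
            total = None
    return parts

print(nextfit_cover([(0.6, 0.2), (0.5, 0.9), (1.0, 1.0), (0.3, 0.3)]))
```

An approximation guarantee for such a heuristic bounds the ratio between the number of parts it closes and the maximum achievable number of covered parts.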

7.
We show how to use various notions of genericity as tools in oracle creation. In particular,
1. we give an abstract definition of genericity that encompasses a large collection of different generic notions;
2. we consider a new complexity class AWPP, which contains BQP (quantum polynomial time), and infer several strong collapses relative to -generics;
3. we show that under additional assumptions these collapses also occur relative to Cohen generics;
4. we show that, relative to -generics, ULIN ∩ co-ULIN ⊄ DTIME(n^k) for any k, where ULIN is unambiguous linear time, despite the fact that UP ⊆ (NP ∩ co-NP) ⊆ P relative to these generics;
5. we show that there is an oracle relative to which NP/1 ∩ co-NP/1 ⊄ (NP ∩ co-NP)/poly; and
6. we use a specialized notion of genericity to create an oracle relative to which NP ⊆ BPP ⊊ MA.
Author Keywords: Complexity classes; Relativization; Generic oracles; Genericity; Forcing

8.
In current networks, packet losses can occur if routers do not provide sufficiently large buffers. This paper studies how large the buffers in a router should be to eliminate packet losses. We assume a network router has m incoming queues, each corresponding to a single traffic stream, and must decide on-line, at each step, from which queue to take the next packet to send out. To exclude packet losses with a small amount of buffer space, the maximum queue length must be kept low over the entire scheduling period. We call this new on-line problem the balanced scheduling problem (BSP). Using competitive analysis, we measure the power of on-line scheduling algorithms to prevent packet losses. We show that a simple greedy algorithm is Θ(log m)-competitive, which is asymptotically optimal, while Round-Robin scheduling is no better than m-competitive, as in fact is any deterministic on-line algorithm for BSP. We also give a polynomial-time algorithm for solving off-line BSP optimally, and we study another on-line balancing problem that tries to balance the delay among the m traffic streams.

9.
In this paper, the on-line motion planning of articulated robots in dynamic environments is investigated. We propose a practical on-line robot motion planning approach that is based upon pre-computing the global configuration-space (C-space) connectivity with respect to all possible obstacle positions. The proposed motion planner consists of an off-line stage and an on-line stage. In the off-line stage, the obstacles in the C-space (C-obstacles) with respect to the obstacle positions in the workspace are computed and then stored using a hierarchical data structure of non-uniform 2^m trees. In the on-line stage, the real obstacle cells in the workspace are identified and the corresponding 2^m trees from the pre-computed database are superposed to construct the real-time C-space. A collision-free path is then searched for in this C-space using the A* algorithm under a multi-resolution strategy, which has excellent computational efficiency. In this approach, the most time-consuming operations are performed in the off-line stage, while the on-line computation only needs to deal with the real-time obstacles occurring in the dynamic environment. The minimized on-line computational cost makes the approach feasible for real-time on-line motion planning. Its validity and efficiency are demonstrated using manipulator prototypes with 5 and 7 degrees of freedom.
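The on-line search step can be sketched as follows (our own illustration: the grid, unit step costs, and Manhattan heuristic are assumptions, and the multi-resolution strategy is omitted): A* over a grid-discretized C-space whose obstacle cells were marked at query time.

```python
# A compact sketch of the on-line search step (our own illustration): A* on a
# grid-discretized configuration space, with C-obstacle cells marked before
# the query, as in the superposition stage described above.
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 marks a C-obstacle cell; returns path length or None."""
    rows, cols = len(grid), len(grid[0])
    heur = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier, seen = [(heur(start), 0, start)], {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                if g + 1 < seen.get((nr, nc), float("inf")):
                    seen[(nr, nc)] = g + 1
                    heapq.heappush(frontier,
                                   (g + 1 + heur((nr, nc)), g + 1, (nr, nc)))
    return None  # goal unreachable in this C-space

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

In the approach above, the expensive part (computing which cells are C-obstacles) happens off-line; only the marking and this search run on-line.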

10.
11.
On-Line Load Balancing and Network Flow
In this paper we study two problems that can be viewed as on-line games on a dynamic bipartite graph. The first problem is on-line load balancing with preemption. A centralized scheduler must assign tasks to servers, processing on-line a sequence of task arrivals and departures. Each task is restricted to run on some subset of the servers. The scheduler attempts to keep the load well-balanced. If preemptive reassignments are disallowed, Azar et al. [3] proved a lower bound of Ω(n^{1/2}) on the ratio between the maximum load achieved by an on-line algorithm and the optimum off-line maximum load. We show that this ratio can be greatly reduced by an efficient scheduler using only a small amount of rescheduling. We then apply these ideas to network flow. Cheriyan and Hagerup [6] introduced an on-line game on a bipartite graph as a fundamental step in improving algorithms for computing the maximum flow in networks. They described a randomized strategy to play the game. King et al. [11] studied a modified version of this game, called "node kill", and gave a deterministic strategy. We obtain an improved deterministic algorithm for the node kill game (and hence for maximum flow) in all but the sparsest graphs. The running time achieved is O(mn log_{m/n} n + n^2 log^{2+ε} n), compared with King et al.'s O(mn + n^{2+ε}). These problems combine a demand for good competitive ratios with more traditional requirements of implementation efficiency. Our solutions deal with the tradeoffs between these measures. Received March 15, 1997; revised April 20, 1997.
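The basic on-line restricted-assignment step can be sketched as follows (our own illustration, without the preemptive rescheduling the paper adds): each arriving task may run only on a subset of the servers and is greedily placed on the least-loaded allowed one.

```python
# A sketch (our own, not the paper's preemptive scheduler) of the basic
# on-line restricted-assignment step: each arriving task may run only on a
# subset of servers and is greedily placed on the least-loaded allowed one.
def assign(tasks, n_servers):
    """tasks = [(weight, allowed_server_ids), ...]; returns (loads, placement)."""
    loads = [0] * n_servers
    placement = []
    for weight, allowed in tasks:
        target = min(allowed, key=lambda s: loads[s])  # least-loaded allowed
        loads[target] += weight
        placement.append(target)
    return loads, placement

loads, placement = assign([(2, [0, 1]), (3, [1]), (2, [0, 2]), (1, [0, 1])], 3)
print(loads, placement)
```

The competitive ratio measures how far the resulting maximum load can be from the best off-line assignment; the paper shows that allowing a small amount of preemptive reassignment improves this ratio dramatically.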

12.
13.
International Journal of Computer Mathematics, 2012, 89(3-4): 149–153
The Aho-Corasick algorithm is a well-known method of determining the occurrences of one of several given pattern strings in a given text string. We address the question of augmenting the pattern matching machine constructed by this algorithm with a new pattern string, both on-line and off-line. We show that augmenting a machine of N nodes with a new pattern string of length m takes Θ(mN) time on-line and Θ(N) time off-line.
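The machine being augmented can be pictured with a compact Aho–Corasick implementation (our own illustration; the paper's augmentation procedures are not reproduced here): a trie over the patterns, failure links set by a BFS, and a single left-to-right scan of the text.

```python
# A compact Aho–Corasick sketch (our own illustration of the pattern-matching
# machine the abstract refers to): build the trie, set failure links with a
# BFS, then scan the text once, reporting (position, pattern) pairs.
from collections import deque

def build(patterns):
    nxt, fail, out = [{}], [0], [[]]        # per-node: edges, failure, outputs
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in nxt[node]:
                nxt.append({}); fail.append(0); out.append([])
                nxt[node][ch] = len(nxt) - 1
            node = nxt[node][ch]
        out[node].append(pat)
    queue = deque(nxt[0].values())          # root's children keep fail = 0
    while queue:
        u = queue.popleft()
        for ch, v in nxt[u].items():
            f = fail[u]
            while f and ch not in nxt[f]:   # follow failures until a match
                f = fail[f]
            fail[v] = nxt[f].get(ch, 0)
            out[v] += out[fail[v]]          # inherit shorter suffix matches
            queue.append(v)
    return nxt, fail, out

def search(text, machine):
    nxt, fail, out = machine
    node, hits = 0, []
    for i, ch in enumerate(text):
        while node and ch not in nxt[node]:
            node = fail[node]
        node = nxt[node].get(ch, 0)
        hits += [(i - len(p) + 1, p) for p in out[node]]
    return hits

machine = build(["he", "she", "his", "hers"])
print(search("ushers", machine))
```

The on-line augmentation problem studied above asks how cheaply such a machine, once built, can absorb one more pattern without rebuilding from scratch.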

14.
15.
16.
New approximation algorithms for packing rectangles into a semi-infinite strip are introduced in this paper. Within a standard probability model, an asymptotic average-case analysis is given for the wasted space in the packings produced by these algorithms. An off-line algorithm is presented along with a proof that it wastes Θ(√n) space on the average, where n is the number of rectangles packed. This result is known to apply to optimal packings as well. Several on-line shelf algorithms are also analyzed. With n assumed known in advance, one such algorithm is described and shown to waste Θ(n^{2/3}) space on the average. It is proved that this result also characterizes optimal on-line shelf packings. For a very general class of linear-time algorithms, it is shown that a constant (nonzero) fraction of the space must be wasted on the average for all n, and a lower bound on this fraction in terms of algorithm parameters is given. Finally, the paper discusses the implications of the above results for dynamic packing and two-dimensional bin-packing problems.
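Shelf algorithms of the kind analyzed above can be sketched as follows; the geometric height classes (powers of r) and the one-open-shelf-per-class rule are our illustrative choices, not necessarily the paper's exact variants.

```python
# A sketch of on-line shelf packing (our own illustrative variant): each
# rectangle is assigned to a shelf whose height is the smallest power of r
# that fits it; if the open shelf of that class lacks room, a new shelf is
# opened. Wasted space = strip area used minus total rectangle area.
def shelf_pack(rects, strip_width=1.0, r=0.5):
    """rects = [(w, h), ...] with w <= strip_width and h <= 1."""
    shelves = {}            # height class -> remaining width of open shelf
    used_height = 0.0
    for w, h in rects:
        cls = 1.0
        while cls * r >= h:                 # smallest power of r fitting h
            cls *= r
        if shelves.get(cls, -1.0) < w:      # open a new shelf of this class
            shelves[cls] = strip_width
            used_height += cls
        shelves[cls] -= w
    waste = used_height * strip_width - sum(w * h for w, h in rects)
    return used_height, waste

height, waste = shelf_pack([(0.5, 0.4), (0.3, 0.45), (0.6, 0.2), (0.2, 0.9)])
print(round(height, 3), round(waste, 3))
```

The average-case results above concern exactly this `waste` quantity as the number of random rectangles n grows.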

17.
In this paper one- and two-dimensional processor arrays for the orthogonal solution of systems of linear equations are presented. Each processor cell of these processor arrays is able to carry out a Givens rotation. By using a modified Givens rotation algorithm, only multiplications and additions need to be executed in these processor cells. Compared to previous approaches for an orthogonal solution of systems of linear equations, the presented processor arrays have the following advantages:
• the hardware in an individual processor cell consists only of multipliers and adders;
• almost full utilization (asymptotically 100%) of these hardware components in an active processor cell can be achieved;
• the latency time for one Givens rotation (GR), which determines the data pulse frequency of the processor arrays, can be reduced from O(b) to O(log_2 b), where b is the length of a dataword.
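The Givens rotation each cell performs can be illustrated in software (a plain sequential sketch of our own, not the multiply-add-only variant or the array mapping described above): subdiagonal entries are zeroed one rotation at a time, after which back substitution solves the triangular system.

```python
# A plain software sketch of solving a linear system by Givens rotations
# (our own illustration of the operation each processor cell performs): zero
# out the subdiagonal entries one rotation at a time, then back-substitute.
import math

def givens_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]; b = b[:]
    for col in range(n):
        for row in range(col + 1, n):
            if A[row][col] == 0:
                continue
            r = math.hypot(A[col][col], A[row][col])
            c, s = A[col][col] / r, A[row][col] / r
            for k in range(n):               # rotate the two rows
                A[col][k], A[row][k] = (c * A[col][k] + s * A[row][k],
                                        -s * A[col][k] + c * A[row][k])
            b[col], b[row] = c * b[col] + s * b[row], -s * b[col] + c * b[row]
    x = [0.0] * n                            # back substitution
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

print(givens_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))
```

The modified (multiplication-and-addition-only) rotation in the paper avoids the square root and division inside each cell; this sketch uses the textbook form for clarity.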

18.
This issue contains the proceedings of the First International Workshop on Parallel and Distributed Model Checking (PDMC 2002), held in Brno, Czech Republic, on August 19, 2002 as a satellite event of the 13th International Conference on Concurrency Theory (CONCUR 2002). The aim of the PDMC 2002 workshop was to cover all aspects of parallel and distributed model checking and supporting techniques. Its mission was both to introduce people to the field of parallel and distributed model checking and to serve as a working forum for presenting new research, thus building relationships between people working on parallel and distributed approaches to the verification of large-scale systems and encouraging cross-fertilization of ideas. The workshop programme consisted of:
2 invited talks, by Moshe Vardi on Model Checking: A Complexity-Theoretic Perspective and by Orna Grumberg on Different Directions in Parallel and Distributed Model Checking,
8 regular presentations, and
5 short presentations.
The proceedings contain the regular papers only. The regular papers and short presentations were accepted by the following programme committee:
Lubos Brim (MU Brno, Czech Republic) - Co-chair
Gianpiero Cabodi (Torino, Italy)
Hubert Garavel (INRIA, France)
Orna Grumberg (Technion, Israel) - Co-chair
Boudewijn R. Haverkort (Aachen, Germany)
Kim Larsen (BRICS Aalborg, Denmark)
Rajeev Ranjan (Real Intent, USA)
We would like to thank all members of the programme committee for their cooperation and for their detailed reports and comments, and all the authors for their contributions. For on-line information see http://www.fi.muni.cz/concur2002/PDMC/.
August 19, 2002Lubos Brim
Orna Grumberg

19.
Many of the problems addressed through engineering analysis include a set of regulatory (or other) probabilistic requirements that must be demonstrated with some degree of confidence through the analysis. Problems cast in this environment can pose new challenges for computational analyses in both model validation and model-based prediction. The “regulatory problems” given for the “Sandia challenge problems exercise”, while relatively simple, provide an opportunity to demonstrate methods that address these challenges. This paper describes and illustrates methods that can be useful in analysis of the regulatory problem. Specifically, we discuss:
(1) an approach for quantifying variability and uncertainty separately to assess the regulatory requirements and provide a statement of confidence; and
(2) a general validation metric to focus the validation process on a specific range of the predictive distribution (the predictions near the regulatory threshold).
These methods are illustrated using the challenge problems. Solutions are provided for both the static frame and structural dynamics problems.
Keywords: Regulatory problem; Calibration; Model validation; Model-based prediction
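The idea of quantifying variability and uncertainty separately can be sketched with a generic double-loop Monte Carlo scheme (our own illustration, not the paper's specific method; the distributions, bounds, and threshold below are invented): the outer loop samples uncertain (epistemic) parameters, the inner loop samples aleatory variability, and the spread of the resulting exceedance-probability estimates supports a statement of confidence.

```python
# A generic double-loop Monte Carlo sketch (our own illustration, not the
# paper's method; all distributions and bounds here are invented): each outer
# sample of the uncertain parameters yields one estimate of the probability
# of exceeding a regulatory threshold; the 95th percentile of those
# estimates supports a confidence statement.
import random

def exceedance_with_confidence(threshold, n_outer=200, n_inner=500, seed=0):
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_outer):
        # Epistemic: the true mean response is only known to lie in [0.8, 1.2].
        mean = rng.uniform(0.8, 1.2)
        # Aleatory: unit-to-unit variability around that mean.
        hits = sum(rng.gauss(mean, 0.1) > threshold for _ in range(n_inner))
        estimates.append(hits / n_inner)
    estimates.sort()
    # "With 95% confidence, the exceedance probability is no larger than..."
    return estimates[int(0.95 * n_outer)]

p95 = exceedance_with_confidence(threshold=1.1)
print(0.0 <= p95 <= 1.0)
```

Focusing the validation metric on the predictive distribution near the threshold, as the abstract describes, concentrates effort exactly where these estimates decide the regulatory question.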

20.