Similar Documents
 20 similar documents found (search time: 0 ms)
1.
Continuous consensus (CC) is the problem of maintaining up-to-date and identical copies of a “core” of information about the past at all correct processes in the system (Mizrahi and Moses, 2008 [6]). This is a primitive that supports simultaneous coordination among processes, and eliminates the need for issuing separate instances of consensus for different tasks. Recent work has presented new simple and efficient optimum protocols for continuous consensus in the crash and (sending) omissions failure models. For every pattern of failures, these protocols maintain at each and every time point a core that subsumes that maintained by any other continuous consensus protocol. This paper considers the continuous consensus problem in the face of harsher failures: general omissions and authenticated Byzantine failures. Computationally efficient optimum protocols for CC do not exist in these models if P ≠ NP. A variety of CC protocols are presented. The first is a simple protocol that enters every interesting event into the core within t+1 rounds (where t is the bound on the number of failures), provided a majority of the processes are correct. The second is a protocol that achieves similar performance so long as n > t (i.e., there is always guaranteed to be at least one correct process). The final protocol makes use of active failure monitoring and failure detection to include events in the core much faster in many runs of interest. Its performance is established based on a nontrivial property of minimal vertex covers in undirected graphs. The results are adapted to the authenticated Byzantine failure model, in which it is assumed that faulty processes are malicious, but correct processes have unforgeable signatures. Finally, the problem of uniform CC is considered. It is shown that a straightforward version of uniform CC is not solvable in the setting under study. A weaker form of uniform CC is defined, and protocols achieving it are presented.

2.
3.
Diffusion of electronic mail (e-mail) is not yet universal. So far, e-mail has been implemented successfully within organisations, but its implementation for communications between organisations has been rather limited. This situation is surprising, given the great potential of e-mail for interorganisational communication. E-mail encounters from a user's point of view, reviewed in this paper, suggest that users of BITNET, one of the predominant e-mail networks in the academic world, face difficulties while interacting with e-mail. These include addressing difficulties, unreliability issues, medium limitations, and interface problems. BITNET is just one of many interorganisational networks and may not be representative. Still, e-mail technology is unlikely to survive if human engineering and reliability are not uniformly satisfactory across all e-mail systems. Poorly engineered e-mail systems frustrate not only their users, but also users of other networks because of gateways between the networks. Therefore, e-mail users might resort to other communication media like facsimile or the telephone, and abandon e-mail altogether.

For e-mail to be competitive in the communication arena, an interdisciplinary effort should be directed toward standardisation of features like better addressing conventions, international user directories, uniform user interfaces, and sophisticated management of e-mail messages.


4.
Michael Erdmann, Algorithmica, 1993, 10(2-4):248-291
This paper explores the use of randomization as a primitive action in the solution of robot tasks. An example of randomization is the strategy of shaking a bin containing a part in order to orient the part in a desired stable state with some high probability. Further examples include tapping, vibrating, twirling, and random search. For instance, it is sometimes beneficial for a system to execute random motions purposefully when the precise motions required to perform an operation are unknown, as when they lie below the available sensor resolution. The purpose of this paper is to provide a theoretical framework for the planning and execution of randomized strategies for robot tasks. This framework is built on the standard backchaining approach of dynamic programming. Specifically, a randomizing planner backchains from the goal in a state space whose states describe the knowledge available to the system at run-time. By choosing random actions in a principled manner at run-time, a system can sometimes obtain a probabilistic strategy for accomplishing a task even when no guaranteed strategy exists for accomplishing that task. In other cases, the system may be able to obtain a speedup over an existing guaranteed strategy. The main result of this paper consists of two examples. One example shows that randomization can sometimes speed up task completion from exponential time to polynomial time. The other example shows that such a speedup is not always possible. The author is now with the School of Computer Science and the Robotics Institute at Carnegie-Mellon University. The work reported here was performed while at the MIT Artificial Intelligence Laboratory.
This work was supported in part by the Office of Naval Research under the University Research Initiative Program through Contract N00014-86-K-0685, by an NSF Presidential Young Investigator Award to Tomás Lozano-Pérez, by a fellowship from NASA's Jet Propulsion Laboratory, and by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research Contract N00014-85-K-0124. Additional support at CMU was provided under NSF grant IRI-9010686.
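The bin-shaking strategy described above can be illustrated with a tiny simulation. This is only a sketch: the stable orientations and their settling probabilities below are hypothetical, standing in for what a real model would derive from part geometry. The point is that repeating a random action succeeds after about 1/p attempts on average, even when no single guaranteed motion exists.

```python
import random

def shake_until_oriented(desired, stable_probs, rng, max_shakes=10_000):
    """Repeatedly 'shake' until the part settles in the desired stable state.

    stable_probs maps each stable orientation to its settling probability
    (hypothetical values here). Returns the number of shakes used.
    """
    states = list(stable_probs)
    weights = [stable_probs[s] for s in states]
    for shakes in range(1, max_shakes + 1):
        outcome = rng.choices(states, weights=weights, k=1)[0]
        if outcome == desired:
            return shakes
    raise RuntimeError("did not reach desired orientation")

rng = random.Random(0)
probs = {"flat": 0.6, "side": 0.3, "upright": 0.1}
trials = [shake_until_oriented("upright", probs, rng) for _ in range(2000)]
print(round(sum(trials) / len(trials), 1))  # should be near 1/0.1 = 10 shakes
```

Each attempt is an independent trial, so the number of shakes is geometrically distributed with mean 1/p, here about 10 for the rare "upright" state.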

5.
The rate of parameter convergence in a number of adaptive estimation schemes is related to the smallest eigenvalue of the average information matrix determined by the regression vector. Using very simple examples, we illustrate that the input signals that maximize this minimum eigenvalue may be quite different from the input signals that optimize more classical input design criteria, e.g., the D-optimality criterion.
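A minimal sketch of the quantity discussed above, under the assumption of a hypothetical two-parameter FIR regression vector φ(k) = [u(k), u(k-1)]: the smallest eigenvalue of the average information matrix (1/N) Σ φ(k)φ(k)ᵀ is computed in closed form for the 2×2 case and compared for a constant input versus a random binary input.

```python
import math
import random

def min_eig_avg_info(u):
    """Smallest eigenvalue of (1/N) * sum phi(k) phi(k)^T for the
    (assumed) regression vector phi(k) = [u(k), u(k-1)]."""
    a = b = c = 0.0
    n = 0
    for k in range(1, len(u)):
        p0, p1 = u[k], u[k - 1]
        a += p0 * p0; b += p0 * p1; c += p1 * p1
        n += 1
    a, b, c = a / n, b / n, c / n
    # Closed-form eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]].
    mid = (a + c) / 2
    rad = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return mid - rad

rng = random.Random(0)
const_u = [1.0] * 500
prbs_u = [rng.choice([-1.0, 1.0]) for _ in range(500)]
print(min_eig_avg_info(const_u))  # 0: a constant input excites one direction only
print(min_eig_avg_info(prbs_u))   # close to 1: random binary input excites both
```

A constant input makes the average information matrix singular (smallest eigenvalue 0), so the slowest parameter direction never converges; the random binary input keeps both eigenvalues near 1.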

6.
In the last few years great attention has been concentrated on consensus algorithms in networks of agents. A consensus problem in which the agreement value is a distributed estimate of some non-constant quantity of interest is referred to as dynamic consensus. In practical applications, an effective network architecture to support sensing and communication between agents is a Wireless Sensor Network (WSN). This paper deals with the design of a fast dynamic consensus algorithm implemented over a WSN. A sufficient stability condition for the dynamic consensus algorithm in the presence of heterogeneous time delays, which affect communication across the multiple hops of the WSN, is introduced and used to design the consensus algorithm gain. Moreover, implementation of the algorithm with the standard AODV routing protocol is discussed, and the best parameter setting to reduce the effect of packet collisions on the performance of the consensus algorithm is indicated. Several trade-offs between network parameter settings, sensor node technology selection, and application performance have to be taken into account by the designer when implementing the dynamic consensus algorithm. A representative simulation-based design procedure is presented to validate the proposed design approach through realistic simulation experiments.

7.
In this paper, we study the problem of robust consensus tracking for a class of second-order multi-agent dynamic systems with disturbances and unmodeled agent dynamics. Contrary to previous approaches, we design continuous distributed consensus protocols to enable global asymptotic consensus tracking. Our focus is on consensus protocol design and stability analysis which also leads to the derivation of sufficient conditions for consensus tracking. We first consider the case of undirected information exchange with a symmetric and positive definite information-exchange matrix. We develop an identifier for each agent to estimate the unknown disturbances and unmodeled agent dynamics. Based on the identifier, we develop a consensus tracking protocol to enable global asymptotic consensus tracking using local information obtained from neighboring agents. The closed-loop stability is proven using Lyapunov analysis theory and an invariance-like theorem. We then extend the approach to the case of directed information exchange, whose information-exchange matrix is only of full rank so that the approach for undirected graphs cannot be directly applied. We show that global asymptotic consensus tracking can still be enabled under the new derived sufficient conditions by designing a new identifier, which utilizes the estimated information exchanged from neighboring agents, and constructing a new Lyapunov function. Examples and numerical simulations are provided to validate the effectiveness of the proposed robust consensus tracking method.

8.
9.
10.
Aircraft noise is a major nuisance in residential communities around airports. If the air transport industry is to meet the ever increasing demand for air travel, determined efforts are required now to reduce the burden of noise upon these communities. Significant engine noise reductions have already been achieved in the latest generation of wide-bodied aircraft, and further reductions are being forecast by the engine manufacturers. Regardless of whether there are justifiable grounds for this optimism, there are alternative steps to be taken. But the problem is basically an economic rather than a technological one: how much does noise reduction cost, and how much can we afford to pay? The various costs of aircraft noise, both monetary and social, are discussed in relation to its effects upon people. Although an economic analysis of the problem is feasible, it is doubtful whether our understanding of the relationships between physical noise levels and human reaction is yet adequate for such purposes. Planning methods for estimating the extent of community noise nuisance are presented, and it is shown that consideration should be given to outlying regions exposed to relatively little aircraft noise.

11.
In fuzzy logic, there are several methods of representing implication in terms of &, ∨, and ¬; in particular, explicit representations define a class of S-implications, and implicit representations define a class of R-implications. Some reasonable implication operations have been proposed, such as Yager's a^b, that are difficult to represent as S- or R-implications. For such operations, a new class of representations has recently been proposed, called A-implications, for which the relationship between implications and the basic operations &, ∨, and ¬ is even more complicated. A natural question is: Is this complexity really necessary? In other words, is it true that A-operations cannot be described as S- or R-operations, or can they, and we simply have not found these representations? In this paper we show that yes, the complexity is necessary, because there are operations that cannot be represented in a simpler form. © 1998 John Wiley & Sons, Inc.
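For concreteness, here is a sketch of the three constructions mentioned above, using standard textbook instances (an assumption; the paper's exact operators may differ): the Kleene-Dienes S-implication I(a,b) = S(n(a), b) with S = max and n(a) = 1 - a, the Goguen R-implication obtained by residuating the product t-norm, and Yager's power-based implication. Conventions differ on argument order in Yager's operation; the version below uses I(a, b) = b**a with the usual 0**0 = 1 convention.

```python
def s_implication(a, b):
    """Kleene-Dienes S-implication: I(a,b) = S(n(a), b), S = max, n(a) = 1 - a."""
    return max(1.0 - a, b)

def r_implication(a, b):
    """Goguen R-implication for the product t-norm:
    I(a,b) = sup{c in [0,1] : a*c <= b} = 1 if a <= b, else b/a."""
    return 1.0 if a <= b else b / a

def yager_implication(a, b):
    """Yager's power implication I(a,b) = b**a, with 0**0 taken as 1."""
    if a == 0.0:
        return 1.0
    return b ** a

for impl in (s_implication, r_implication, yager_implication):
    print(impl.__name__, impl(0.8, 0.5), impl(0.3, 0.9))
```

Note that all three agree that a true antecedent and false consequent give a small value, but they diverge elsewhere, which is why the representability question above is nontrivial.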

12.
We give a time-randomness tradeoff for the quasi-random rumor spreading protocol proposed by Doerr, Friedrich and Sauerwald [SODA 2008] on complete graphs. In this protocol, the goal is to spread a piece of information originating from one vertex throughout the network. Each vertex is assumed to have a (cyclic) list of its neighbors. Once a vertex is informed by one of its neighbors, it chooses a position in its list uniformly at random and then informs its neighbors starting from that position and proceeding in the order of the list. Angelopoulos, Doerr, Huber and Panagiotou [Electron. J. Combin. 2009] bounded the number of rounds after which the rumor will have been broadcast to all nodes with probability 1−o(1). We study the broadcast time when the amount of randomness available at each node is reduced in a natural way. In particular, we prove that if each node can only make its initial random selection from every ℓ-th node on its list, then there exist lists for which strictly more steps are needed to inform every vertex with the same probability guarantee. This shows that a further reduction of the amount of randomness used in a simple quasi-random protocol comes at a loss of efficiency.
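The protocol described above is concrete enough to simulate. The sketch below implements the full-randomness version on the complete graph K_n (one push call per informed vertex per round); its broadcast time can be compared against the roughly log2(n) + ln(n) rounds known for quasi-random rumor spreading. The reduced-randomness variant from the paper would restrict the initial random position to every ℓ-th list entry.

```python
import random

def quasi_random_broadcast(n, rng):
    """Quasi-random rumor spreading on the complete graph K_n.

    Each vertex keeps a cyclic list of the other n-1 vertices. Upon being
    informed, it draws a uniformly random starting position; in every later
    round it informs one neighbour, proceeding in list order.
    Returns the number of rounds until all n vertices are informed.
    """
    lists = [[v for v in range(n) if v != u] for u in range(n)]
    pos = [None] * n          # current list position; None = not yet informed
    informed = {0}
    pos[0] = rng.randrange(n - 1)
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for u in informed:    # synchronous round: every informed vertex sends once
            target = lists[u][pos[u] % (n - 1)]
            pos[u] += 1
            if target not in informed:
                newly.add(target)
        for v in newly:       # vertices informed this round start sending next round
            informed.add(v)
            pos[v] = rng.randrange(n - 1)
    return rounds

print(quasi_random_broadcast(512, random.Random(0)))  # typically near log2(512) + ln(512) ≈ 15
```

Since the informed set can at most double per round, at least log2(n) rounds are always needed; the simulation shows the quasi-random protocol is not far above that floor.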

13.
We study Turing machines that are allowed absolutely no space overhead. The only work space the machines have, beyond the fixed amount of memory implicit in their finite-state control, is that which they can create by cannibalizing the input bits’ own space. This model more closely reflects the fixed-sized memory of real computers than does the standard complexity-theoretic model of linear space. Though some context-sensitive languages cannot be accepted by such overhead-free machines, we show that all context-free languages can be accepted nondeterministically in polynomial time with absolutely no space overhead, and that all deterministic context-free languages can be accepted deterministically in polynomial time with absolutely no space overhead.

14.
Solving vector consensus with a wormhole
This paper presents a solution to the vector consensus problem for Byzantine asynchronous systems augmented with wormholes. Wormholes prefigure a hybrid distributed system model, embodying the notion of an enhanced part of the system with "good" properties otherwise not guaranteed by the "normal" weak environment. A protocol built for this type of system runs in the asynchronous part, where f out of n ≥ 3f + 1 processes might be corrupted by malicious adversaries. However, sporadically, processes can rely on the services provided by the wormhole for the correct execution of simple operations. One of the nice features of this setting is that it is possible to keep the protocol completely time-free and, in addition, to circumvent the FLP impossibility result by hiding all time-related assumptions in the wormhole. Furthermore, from a performance perspective, it leads to the design of a protocol with a good time complexity.

15.
Consideration was given to the problem of achieving an approximate consensus in a decentralized stochastic dynamic network under incomplete information about the current states of the nodes, measurement delay, and a variable structure of links. The solution was based on the protocol of local voting with nonvanishing steps. It was proposed to analyze the dynamics of the closed network using the method of averaged models, which was extended to systems with measurement delays. This method enables one to establish good analytical estimates of the permissible step size providing the desired accuracy of consensus, and appreciably reduces the computational burden of simulation. The results obtained were applied to the analysis of the dynamics of a system for balancing computer network load.
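The local voting protocol with a constant (nonvanishing) step can be sketched in a few lines. The ring topology, step size, and initial loads below are hypothetical illustrations; in the delay-free case the iteration x ← (I − αL)x drives all states to the network average, which is the load-balancing interpretation mentioned at the end of the abstract.

```python
def local_voting_step(x, neighbors, alpha):
    """One synchronous step of local voting with constant step alpha:
    each node moves toward the states of its neighbours."""
    return [xi + alpha * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Hypothetical 6-node ring; states model per-node load (average is 5.0).
neighbors = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
x = [10.0, 0.0, 4.0, 8.0, 2.0, 6.0]
for _ in range(200):
    x = local_voting_step(x, neighbors, 0.2)
print([round(v, 3) for v in x])  # every state ends up near the average load 5.0
```

Because the update matrix is symmetric, the average load is conserved at every step; the constant step must be small enough (relative to the largest Laplacian eigenvalue) for the iteration to contract, which is the kind of step-size condition the averaged-model analysis above makes precise.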

16.
We investigate the message complexity of electing a leader in a ring of asynchronous processors. Our work deviates from previous work on leader election in that we consider the effect of link failures. A link is said to fail if some message sent through it never reaches its destination. We distinguish the case where n is known from the case where n is unknown. Our main result is an O(n · log n) algorithm for electing a leader on an n-processor ring (for the case where n is known).

17.
This paper deals with fuzzy dynamic fault tree analysis for an electro-mechanical actuator with common-cause failures, aiming to provide a fast fault-location strategy. To fully describe the level of uncertainty of basic events, triangular fuzzy sets are used. Temporal operators are defined, and an algebraic model, applicable to any kind of life distribution, is used to model and analyze the dynamic fault tree of the electro-mechanical actuator. An innovative common-cause gate that accounts for incomplete common causes is proposed, and it is compatible with the algebraic analysis method. Fuzzy probability importance is computed by a level-progressive strategy, avoiding the complexity of solving the entire tree at once and reducing computation time. The analysis results show that the method is flexible, handles the reliability analysis of the electro-mechanical actuator effectively, and can provide suggestions for fault location.

18.
A model for distributed systems with failing components is presented. Each node may fail, and during its recovery its load is distributed to other nodes that are up. The model assumes periodic checkpointing for error recovery and testing of the status of other nodes for the distribution of load. We consider the availability of a node, which is the proportion of time a node is available for processing, as the performance measure. A methodology for optimizing the availability of a node with respect to the checkpointing and testing intervals is given. A decomposition approach that uses the steady-state flow balance condition to estimate the load at a node is proposed. Numerical examples are presented to demonstrate the usefulness of the technique. For the case in which all nodes are identical, closed-form solutions are obtained. This research was performed while David Finkel and Satish Tripathi were visiting ISEM. Satish Tripathi's research was supported in part by grants from NSF (grant no. DCR-84-05235) and NASA (grant no. NAG5-235), and by Université de Paris-Sud.

19.

Background

Acute Respiratory Distress Syndrome (ARDS) results in collapse of alveolar units and loss of lung volume at the end of expiration. Mechanical ventilation is used to treat patients with ARDS or Acute Lung Injury (ALI), with the end objective being to increase the dynamic functional residual capacity (dFRC), and thus the overall functional residual capacity (FRC). Simple methods to estimate dFRC at a given positive end expiratory pressure (PEEP) level in patients with ARDS/ALI do not currently exist. Existing viable methods are time-consuming and relatively invasive.

Methods

Previous studies have found a constant linear relationship between global stress and strain in the lung, independent of lung condition. This study utilizes the constant stress-strain ratio and an individual patient's volume responsiveness to PEEP to estimate dFRC at any level of PEEP. The estimation model identifies two global parameters for estimating a patient-specific dFRC: β, which captures physiological parameters of FRC and of lung and respiratory elastance and varies depending on the PEEP level used, and a second parameter, the gradient of β versus PEEP.

Results

dFRC was estimated at different PEEP values and compared to the measured dFRC using retrospective data from 12 patients with different levels of lung injury. The median percentage error is 18% (IQR: 6.49) for PEEP = 5 cm H2O, 10% (IQR: 9.18) for PEEP = 7 cm H2O, 28% (IQR: 12.33) for PEEP = 10 cm H2O, 3% (IQR: 2.10) for PEEP = 12 cm H2O and 10% (IQR: 9.11) for PEEP = 15 cm H2O. The results were further validated using cross-validation (N = 100,000). Linear regression between the estimated and measured dFRC yielded a median R² of 0.948 (IQR: 0.915, 0.968; 90% CI: 0.814, 0.984) over the N = 100,000 cross-validation tests.

Conclusions

The results suggest that a model-based approach to estimating dFRC may be viable in a clinical scenario without any interruption to ventilation, and can thus provide an alternative to measuring dFRC by disconnecting the patient from the ventilator or by using advanced ventilators. The overall results provide a means of estimating dFRC at any PEEP level. Although reasonable clinical accuracy is limited to the linear region of the static PV curve, the model can evaluate the impact of changes in PEEP or other mechanical ventilation settings.

20.
Information Systems, 2005, 30(1):21-46
This paper addresses some of the foundational issues associated with discovering the best few correlations from a database. Specifically, we consider the computational complexity of various definitions of the “top-k correlation problem,” where the goal is to discover the few sets of events whose co-occurrence exhibits the smallest degree of independence. Our results show that many rigorous definitions of correlation lead to intractable and strongly inapproximable problems. Proof of this inapproximability is significant, since similar problems studied by the computer science theory community have resisted such analysis. One goal of the paper (and for future research) is to develop alternative correlation metrics whose use will both allow efficient search and produce results that are satisfactory for users.
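As a toy illustration of the problem statement (not of the paper's complexity results), one simple correlation score is the ratio of a pair's observed co-occurrence frequency to the frequency expected under independence, often called lift; a brute-force top-k search over event pairs then looks as follows. The data and the choice of score are assumptions for illustration only; the paper's point is precisely that rigorous versions of this search are intractable.

```python
from itertools import combinations

def top_k_pairs_by_lift(transactions, k):
    """Naive top-k correlated event pairs, scored by lift:
    P(a and b) / (P(a) * P(b)). Brute force over all observed pairs."""
    n = len(transactions)
    counts, pair_counts = {}, {}
    for t in transactions:
        items = sorted(set(t))
        for a in items:
            counts[a] = counts.get(a, 0) + 1
        for a, b in combinations(items, 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1

    def lift(pair):
        a, b = pair
        return (pair_counts[pair] / n) / ((counts[a] / n) * (counts[b] / n))

    ranked = sorted(pair_counts, key=lift, reverse=True)
    return [(p, round(lift(p), 2)) for p in ranked[:k]]

data = [{"a", "b"}, {"a", "b"}, {"a", "b", "c"}, {"c", "d"}, {"d"}, {"a", "c"}]
print(top_k_pairs_by_lift(data, 2))
```

A lift above 1 means the pair co-occurs more often than independence predicts; the brute-force scan over all pairs is already quadratic in the number of events, and the hardness results above concern far stricter formulations of the same search.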
