20 similar documents found; search time: 0 ms
1.
A new method is described and tested for using an unreliable character recognition device to produce a reliable index for a collection of documents. All highly likely substitution errors of the recognition device are handled by transforming characters which confuse readily into the same pseudocharacter. An analysis of the method shows the expected precision (fraction of the words retrieved that were retrieved correctly) and recall (fraction of the words present that were correctly found). Published substitution error matrices, along with a large file of words and word frequencies, were used to evaluate the method. Performance was surprisingly good. Suggestions for further enhancements are given.
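The pseudocharacter transformation can be sketched in a few lines. The confusion classes below are purely hypothetical stand-ins, not the published substitution error matrices the paper evaluates:

```python
# Sketch of the pseudocharacter idea: characters the recognition device
# readily confuses are collapsed into one pseudocharacter before indexing,
# so a word and its likely misreadings map to the same index key.

# Illustrative confusion classes (hypothetical, not from the paper):
CONFUSION_CLASSES = ["O0Q", "Il1", "S5", "B8"]

# Build a translation table: every member of a class maps to its first member.
_TRANS = {}
for cls in CONFUSION_CLASSES:
    for ch in cls:
        _TRANS[ch] = cls[0]

def to_pseudo(word: str) -> str:
    """Map a word to the pseudocharacter form used as its index key."""
    return "".join(_TRANS.get(ch, ch) for ch in word.upper())

# A correctly read word and a likely OCR misreading share one key:
assert to_pseudo("OIL") == to_pseudo("01L")
```

Collapsing classes trades a little precision (distinct words may now collide) for recall (misread words are still found), which is exactly the trade-off the paper's analysis quantifies.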
2.
Keren Censor-Hillel Seth Gilbert Fabian Kuhn Nancy Lynch Calvin Newport 《Distributed Computing》2014,27(1):1-19
In this paper we study the problem of building a constant-degree connected dominating set (CCDS), a network structure that can be used as a communication backbone, in the dual graph radio network model (Clementi et al. in J Parallel Distrib Comput 64:89–96, 2004; Kuhn et al. in Proceedings of the international symposium on principles of distributed computing 2009, Distrib Comput 24(3–4):187–206 2011, Proceedings of the international symposium on principles of distributed computing 2010). This model includes two types of links: reliable, which always deliver messages, and unreliable, which sometimes fail to deliver messages. Real networks compensate for this differing quality by deploying low-layer detection protocols to filter unreliable from reliable links. With this in mind, we begin by presenting an algorithm that solves the CCDS problem in the dual graph model under the assumption that every process $u$ is provided with a local link detector set consisting of every neighbor connected to $u$ by a reliable link. The algorithm solves the CCDS problem in $O\left( \frac{\varDelta \log ^2{n}}{b} + \log ^3{n}\right) $ rounds, with high probability, where $\varDelta $ is the maximum degree in the reliable link graph, $n$ is the network size, and $b$ is an upper bound in bits on the message size. The algorithm works by first building a Maximal Independent Set (MIS) in $\log ^3{n}$ time, and then leveraging the local topology knowledge to efficiently connect nearby MIS processes. A natural follow-up question is whether the link detector must be perfectly reliable to solve the CCDS problem. With this in mind, we first describe an algorithm that builds a CCDS in $O(\varDelta $ polylog $(n))$ time under the assumption of $O(1)$ unreliable links included in each link detector set. 
We then prove this algorithm to be (almost) tight by showing that the possible inclusion of only a single unreliable link in each process’s local link detector set is sufficient to require $\varOmega (\varDelta )$ rounds to solve the CCDS problem, regardless of message size. We conclude by discussing how to apply our algorithm in the setting where the topology of reliable and unreliable links can change over time.
3.
We consider multichannel systems and open queueing networks with unreliable elements: nodes, paths between nodes, and channels at nodes. Computation of limiting distributions in a product form for these models is based on choosing recovery schemes for unreliable elements (independent recovery, recovery at a single site, recovering network scheme), routing algorithms, and service disciplines. Thus, by introducing a certain control, we constructively relate queueing theory with reliability theory. Results of the paper can be transferred to closed networks almost without changes.
4.
This paper considers the problem of selecting the optimum capacities of the links in a computer communication network which employs unreliable links. Given the nodes, links, link probabilities, grade of service and cost functions of the network, the objective is to find the optimum link capacities that minimize the network design cost, subject to the constraint equation involving the grade of service. This is essentially a combinatorial optimization problem. A general mathematical model for this problem is formulated and a set of feasible solutions is obtained using Lagrangean relaxation and subgradient optimization techniques. A simulation study has been performed to verify the model, and favourable results were obtained for a variety of nontrivial networks.
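The Lagrangean-relaxation-plus-subgradient pattern can be illustrated on a toy capacity-assignment problem. Everything here is hypothetical (an M/M/1-style delay stands in for the grade-of-service function, and the capacity menu, costs and loads are invented); the paper's actual model is not reproduced:

```python
# Toy Lagrangean relaxation with subgradient updates: choose one capacity
# per link from a discrete menu to minimize total cost subject to a bound
# on total delay (a stand-in for the grade-of-service constraint).

CAPS = [1, 2, 4, 8]                           # candidate link capacities
COST = {1: 1.0, 2: 1.8, 4: 3.2, 8: 6.0}       # cost of each capacity
LOAD = [0.5, 0.9, 0.7]                        # traffic on three links
DELAY_BOUND = 3.0                             # grade-of-service bound

def delay(load, cap):
    return 1.0 / (cap - load)                 # M/M/1-style delay, cap > load

def solve(iters=200, step=0.5):
    lam = 0.0                                 # dual multiplier for the bound
    best = None
    for _ in range(iters):
        # The relaxed problem separates per link: each link independently
        # picks the capacity minimizing cost + lam * delay.
        x = [min((c for c in CAPS if c > l),
                 key=lambda c: COST[c] + lam * delay(l, c))
             for l in LOAD]
        total_delay = sum(delay(l, c) for l, c in zip(LOAD, x))
        if total_delay <= DELAY_BOUND and (best is None or
                sum(COST[c] for c in x) < sum(COST[c] for c in best)):
            best = x                          # cheapest feasible seen so far
        # Projected subgradient step on the dual variable.
        lam = max(0.0, lam + step * (total_delay - DELAY_BOUND))
    return best
```

The dual multiplier rises while the delay constraint is violated (pushing links toward larger capacities) and falls when it is slack, sweeping out a set of feasible candidate solutions from which the cheapest is kept.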
5.
Causal order states that, for any process, the order in which messages are delivered to it cannot violate the happened-before relation of the corresponding sendings. Such a communication abstraction has been defined for reliable distributed systems in which the data of application messages have unlimited time validity. In this paper we extend the notion of causal order to cope with unreliable communication networks in which messages have real-time delivery constraints. In particular, we assume that messages have a limited time validity, Δ, after which their data can no longer be used by the application, and that some of them can be lost by the communication network. This new abstraction, called Δ-causal order, requires delivering as many messages as possible within their validity time in such a way that these deliveries respect causal order. Two efficient implementations are proposed for the cases of one-to-one and broadcast communication. Examples are given of distributed multimedia real-time applications in which scheduling message deliveries to respect Δ-causal order is crucial for the quality of service.
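A minimal sketch of the Δ-causal delivery test is given below. It is illustrative only, not either of the paper's two protocols: each message carries its send time and a vector clock, a message whose validity Δ has elapsed is discarded, and otherwise it is delivered only once its causal predecessors have been delivered (the full abstraction additionally delivers when missing predecessors can no longer arrive within their own validity time):

```python
# Delta-causal delivery check (illustrative sketch).
DELTA = 5.0  # validity time of message data (hypothetical units)

def expired(msg, now):
    return now - msg["sent"] > DELTA

def deliverable(msg, delivered_vc, now):
    """msg: {'sender': i, 'sent': t, 'vc': vector clock at sending};
    delivered_vc[p] counts messages from process p already delivered."""
    if expired(msg, now):
        return False                      # data no longer usable: discard
    for p, count in enumerate(msg["vc"]):
        # The sender's own vector-clock entry counts the message itself.
        expected = count - 1 if p == msg["sender"] else count
        if delivered_vc[p] < expected:
            return False                  # a causal predecessor is missing
    return True
```

The interplay of the two branches is the crux of Δ-causal order: waiting preserves causality, but the expiry rule bounds how long a lost predecessor can block later messages.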
6.
Weijie DONG;Shaoyuan LI;Xiang YIN 《SCIENCE CHINA Information Sciences》2025,(1):322-341
In this paper, we investigate a decentralized diagnosis problem for discrete-event systems (DES) subject to unreliable sensors, where the sensor observations of local diagnosers may be non-deterministic as a result of possible failures. Existing studies on decentralized robust diagnosis can only deal with different types of sensor failures separately, e.g., all sensors suffer from the same type of failure, such as intermittent or permanent sensor failures. However, since the sensors of different local diagnosers may face different external environments and have different internal characteristics, sensors corresponding to different local diagnosers may also have their own fault features. In this paper, we propose a flexible framework of decentralized diagnosis for DES subject to unreliable sensors in which the sensors of different local diagnosers are permitted to have different types of failures. To this end, we extend the existing decentralized diagnosis framework to the case where common sensors broadcast their observations to all local diagnosers. We apply linear temporal logic (LTL) to constrain the infinite behaviors of the private sensors of local diagnosers and of the common sensors. Then, a new notion of φ-codiagnosability is proposed as the necessary and sufficient condition for the existence of a decentralized diagnoser that works correctly under sensors satisfying the LTL-based sensor constraints. Finally, we provide an effective approach to verify φ-codiagnosability.
7.
Chung-Ta King Thomas B. Gendreau Lionel M. Ni 《Journal of Parallel and Distributed Computing》1989,7(3)
Election in a computer network is an operation in which one process is selected from among a group of processes to perform a particular task. An election is characterized by (1) the capacities of the candidates, and (2) the agreement reached by all processes to elect the master. In this paper, we show that election is in fact a very general style of computation. Many problems in computer networks can be solved by means of election. We then examine the election problem in computer networks with broadcast support. Basic design issues of election algorithms are addressed, and a number of election algorithms are presented based on various environments. These algorithms allow all nonfaulty processes to elect one and only one process as the master, and, by changing the definition of the capacities, they can be applied to a variety of problems.
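The broadcast-support setting admits a very small sketch of the core agreement step (illustrative only, not one of the paper's algorithms, and ignoring message loss): each nonfaulty process broadcasts its capacity, and all processes apply the same deterministic rule to the announcements they hear, so everyone elects the same master.

```python
# Broadcast-based election sketch: all processes hear the same set of
# (capacity, pid) announcements and apply one deterministic rule, so they
# agree on one and only one master.

def elect(announcements):
    """announcements: list of (capacity, pid) pairs from nonfaulty processes."""
    # Highest capacity wins; ties are broken by the larger process id,
    # which Python's tuple comparison gives us for free.
    return max(announcements)[1]

master = elect([(3, "A"), (7, "B"), (7, "C"), (5, "D")])
```

Redefining "capacity" (e.g., as remaining battery, load, or a token value) retargets the same election mechanism at a different problem, which is the generality the abstract emphasizes.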
8.
We consider Jackson networks with unreliable nodes, which randomly break down and are under repair for a random time. The network is described by a Markov process which encompasses the availability status and queue lengths vector. Ergodicity conditions for many related networks are available in the literature and can often be expressed as rate conditions. For (reliable) nodes in Jackson networks the overall arrival rate has to be strictly less than the service rate. If for some nodes this condition is violated, the network process is not ergodic. Nevertheless, it is known that in such a situation, especially in large networks, the parts of the network where the rate condition is fulfilled stabilize in the long run. For standard Jackson networks without breakdown of nodes, the asymptotics of such stable subnetworks were derived by Goodman and Massey [J.B. Goodman, W.A. Massey, The non-ergodic Jackson network, Journal of Applied Probability 21 (1984) 860–869]. In this paper, we obtain the asymptotics of Jackson networks with unreliable nodes and show that the state distribution of the stable subnetworks converges to a Jackson-type product-form distribution. In such networks with breakdown and repair of nodes, the ergodicity condition is, in general, more involved. Because no stationary distribution for the network exists, steady-state availability and performance evaluation is not possible. We show that, instead, assessment of the long-run quality of service for the stabilizing subnetwork can be done by using limiting distributions. Additionally, we prove that time averages of cumulative rewards can be approximated by state-space averages.
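The rate condition behind these results can be illustrated by solving the classical traffic equations for a small hypothetical three-node network (breakdowns and repairs are omitted in this sketch; node 1 is deliberately overloaded, so only the subnetwork {0, 2} stabilizes):

```python
# Jackson-network traffic equations lambda = gamma + lambda * R, solved by
# fixed-point iteration, plus the per-node rate condition rho < 1.

def traffic_rates(gamma, R, iters=100):
    """Iterate lambda_j = gamma_j + sum_i lambda_i * R[i][j]."""
    n = len(gamma)
    lam = gamma[:]
    for _ in range(iters):
        lam = [gamma[j] + sum(lam[i] * R[i][j] for i in range(n))
               for j in range(n)]
    return lam

gamma = [1.0, 0.0, 0.5]                  # external arrival rates
R = [[0.0, 0.5, 0.0],                    # routing probabilities (row i: from i)
     [0.0, 0.0, 0.5],
     [0.0, 0.0, 0.0]]
mu = [3.0, 0.4, 2.0]                     # service rates; node 1 is too slow

lam = traffic_rates(gamma, R)            # -> [1.0, 0.5, 0.75]
rho = [l / m for l, m in zip(lam, mu)]   # node 1 has rho = 1.25 > 1
# For stable nodes the limiting marginal is geometric:
# P(Q_j = k) = (1 - rho_j) * rho_j ** k.
```

Nodes with rho < 1 retain product-form geometric marginals in the limit even though the network as a whole has no stationary distribution, which is the phenomenon the paper extends to networks with breakdown and repair.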
9.
10.
Wireless sensor networks are inherently plagued by problems of node failure, interference to communications from environmental noise and energy-limited sensor motes. These problems pose conflicting issues in the design of suitable routing protocols. Several existing reliable routing protocols exploit message broadcast redundancy and hop count as routing metrics and their performance trade-offs are revealed during simulation. In this paper, we study and analyse related design issues in proposed efficient and reliable routing protocols that attempt to achieve reliable and efficient communication performance in both single- and multi-hub sensor networks. Simulation results of four such routing protocols show that routing performance depends more on optimal (near-optimal) routing in single hub than in multi-hub networks. Our work also shows that optimal (near-optimal) routing is better achieved when historical metrics like packet distance traversed and transmission success are also considered in the routing protocol design.
11.
We study the question of routing for minimum average drop rate over unreliable servers that are susceptible to random buffer failures, which arises in unreliable data or manufacturing networks. Interestingly, we first reveal that the traditional Join-the-Shortest-Queue (JSQ) or optimal Randomized Splitting (RS) strategies are consistently outperformed by the Constant Splitting Rule (CSR), where the incoming traffic is split with a constant fraction towards the available servers. This finding motivates us to obtain the optimal splitting fraction under CSR. However, the objective function to be minimized depends on the mean queue length of the servers, whose closed-form expression is not available and often intractable for general arrival and service processes. Thus, we use non-derivative methods to solve this optimization problem by approximately evaluating the objective value at each iteration. To that end, we explicitly characterize the approximation error by utilizing the regenerating nature of unreliable buffers. By adaptively controlling the precision of this approximation, we show that our proposed algorithm converges to an optimal splitting decision in the almost sure sense. Yet, previous works on non-derivative methods assume continuous differentiability of the objective function, which is not the case in our setup. We relax this strong assumption to the case when the objective function is locally Lipschitz continuous, which is another contribution of this paper.
12.
In this paper we introduce a novel energy-aware routing protocol REPU (reliable, efficient with path update), which provides reliability and energy efficiency in data delivery. REPU utilizes the residual energy available in the nodes and the received signal strength of the nodes to identify the best possible route to the destination. Reliability is achieved by selecting a number of intermediate nodes as waypoints and the route is divided into smaller segments by the waypoints. One distinct advantage of this model is that when a node on the route moves out or fails, instead of discarding the whole original route, only the two waypoint nodes of the broken segment are used to find a new path. REPU outperforms traditional schemes by establishing an energy-efficient path and also takes care of efficient route maintenance. Simulation results show that this routing scheme achieves much higher performance than the classical routing protocols, even in the presence of high node density, and overcomes simultaneous packet forwarding.
13.
The problem of finding the parameters of asymptotic relations for the operational probability of a graph with unreliable edges is solved. These parameters are expressed through minimax functions of the edge lengths. To compute them, an economical modification of the Floyd algorithm is constructed.
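The standard minimax modification of the Floyd algorithm (shown here as a generic sketch, not necessarily the paper's exact "economical" variant) replaces the sum in the Floyd–Warshall update with a maximum, so d[i][j] becomes the smallest achievable bottleneck, i.e., the minimum over all i-to-j paths of the maximum edge length along the path:

```python
# Minimax (bottleneck) variant of Floyd-Warshall.
INF = float("inf")

def minimax_paths(w):
    """w: adjacency matrix of edge lengths (INF where no edge).
    Returns d with d[i][j] = min over paths of the max edge length."""
    n = len(w)
    d = [row[:] for row in w]
    for i in range(n):
        d[i][i] = 0
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Routing through k costs the larger of the two bottlenecks.
                d[i][j] = min(d[i][j], max(d[i][k], d[k][j]))
    return d
```

Only the inner update changes relative to ordinary shortest paths, so the O(n³) cost and triple-loop structure are preserved.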
14.
Kim Kapsu Hong Myunghui Chung Kyungyong Oh SangYeob 《Peer-to-Peer Networking and Applications》2015,8(4):610-619
It is a very difficult task to estimate abnormal objects and analyze reliability in peer-to-peer (P2P) networks. In the P2P network environment,...
15.
16.
An evidential approach for detection of abnormal behaviour in the presence of unreliable sensors
Bruno Marhic Laurent Delahoche Clément Solau Anne Marie Jolly-Desodt Vincent Ricquebourg 《Information Fusion》2012,13(2):146-160
We address the problem of abnormal behaviour recognition for the inhabitant of a smart home in the presence of unreliable sensors. The cornerstone of this work is a two-level sensor-fusion architecture based on the Transferable Belief Model (TBM). The novelty of our work lies in the way we detect both unreliable sensors and abnormal behaviour within our architecture by using a temporal analysis of the conflict resulting from the fusion of sensors. Detection of abnormal behaviour is based on a prediction/observation process, and the influence of faulty sources is discarded by discounting coefficients. Our architecture is tested in a real-life setting using three heterogeneous sensors enabling the detection of impossible transitions between three possible postures: Sitting, Standing and Lying. The impact of faulty-sensor management is also tested in the real-life experiment for posture detection.
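The conflict mass that drives this detection can be sketched with a plain conjunctive combination of two belief masses on the posture frame (illustrative only; the paper's two-level architecture, discounting and temporal analysis are not reproduced):

```python
# TBM-style conjunctive combination: mass landing on the empty set after
# combining two sources measures their disagreement (the conflict).

def conjunctive_combination(m1, m2):
    """m1, m2: dict mapping frozenset of hypotheses -> mass (summing to 1)."""
    out = {}
    for a, x in m1.items():
        for b, y in m2.items():
            inter = a & b
            out[inter] = out.get(inter, 0.0) + x * y
    return out

S, T, L = "Sitting", "Standing", "Lying"
m_sensor1 = {frozenset([S]): 0.8, frozenset([S, T, L]): 0.2}
m_sensor2 = {frozenset([L]): 0.7, frozenset([S, T, L]): 0.3}

m12 = conjunctive_combination(m_sensor1, m_sensor2)
conflict = m12.get(frozenset(), 0.0)     # here 0.8 * 0.7 = 0.56
```

A persistently high conflict over time is the signature the architecture uses to flag either a faulty source or an impossible posture transition.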
17.
Sparse wireless sensor networks (WSNs) are emerging as an effective solution for a wide range of applications, especially for environmental monitoring. In many scenarios, a moderate number of sparsely deployed nodes can be sufficient to get the required information about the sensed phenomenon. To this end, special mobile elements, i.e. mobile data collectors (MDCs), can be used to get data sampled by sensor nodes. In this paper we present an analytical evaluation of the data collection performance in sparse WSNs with MDCs. Our main contribution is the definition of a flexible model which can derive the total energy consumption for each message correctly transferred by sensors to the MDC. The obtained energy expenditure for data transfer also accounts for the overhead due to the MDC detection when sensor nodes operate with a low duty cycle. The results show that a low duty cycle is convenient and allows a significant amount of correctly received messages, especially when the MDC moves with a low speed. When the MDC moves fast, depending on its mobility pattern, a low duty cycle may not always be the most energy efficient option.
18.
I. S. Mikadze 《Cybernetics and Systems Analysis》1989,25(3):409-419
We consider a queueing system consisting of unreliable processors with hardware, information, and time redundancy. Analytical expressions are derived for the virtual waiting time and flow time of jobs in the system, under both nonstationary and stationary conditions. Translated from Kibernetika, No. 3, pp. 102–109, 128, May–June, 1989.
19.
20.
Dana Angluin James Aspnes Zoë Diamadi Michael J. Fischer René Peralta 《Distributed Computing》2006,18(4):235-253
The computational power of networks of small resource-limited mobile agents is explored. Two new models of computation based on pairwise interactions of finite-state agents in populations of finite but unbounded size are defined. With a fairness condition on interactions, the concept of stable computation of a function or predicate is defined. Protocols are given that stably compute any predicate in the class definable by formulas of Presburger arithmetic, which includes Boolean combinations of threshold-k, majority, and equivalence modulo m. All stably computable predicates are shown to be in NL. Assuming uniform random sampling of interacting pairs yields the model of conjugating automata. Any counter machine with O(1) counters of capacity O(n) can be simulated with high probability by a conjugating automaton in a population of size n. All predicates computable with high probability in this model are shown to be in P; they can also be computed by a randomized logspace machine in exponential time. Several open problems and promising future directions are discussed.
Supported in part by NSF grants CCR-9820888, CCR-0098078, and CNS-0305258 (Aspnes), by ONR grant N00014-01-1-0795 (Diamadi), and by NSF grant CSE-0081823 (Fischer and Peralta).
A preliminary version of this paper appeared in the proceedings of the 23rd ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, St. John’s, Newfoundland, Canada, July 2004.
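The stable-computation idea can be illustrated with one of the simplest Presburger-definable predicates, the one-bit OR ("at least one agent started in state 1"). The simulation below is a generic sketch of the random-pairing model, not a protocol from the paper: state 1 spreads by epidemic, so under any fair scheduler every agent eventually converges to the correct output.

```python
# Population-protocol sketch: finite-state agents interact in uniformly
# random pairs; the transition rule (1, *) -> (1, 1) stably computes OR.
import random

def run_or_protocol(inputs, steps=10000, seed=0):
    rng = random.Random(seed)
    states = list(inputs)
    n = len(states)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)   # pick a random interacting pair
        if states[i] or states[j]:       # transition rule: 1 is infectious
            states[i] = states[j] = 1
    return states

final = run_or_protocol([0, 0, 1, 0, 0])
# After enough interactions, all agents hold the predicate's value (1 here).
```

Threshold, majority, and mod-m predicates follow the same pattern with richer finite state sets; what changes is only the pairwise transition rule, never the agents' memory size.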