Similar Documents
20 similar documents found (search time: 15 ms)
1.
Gossip (or epidemic) algorithms have recently become popular solutions for multicast message dissemination in peer-to-peer systems. Nevertheless, gossip is not straightforward to apply to on-demand streaming because it often fails to achieve timely delivery. To solve this problem, and taking into account that peers join and leave peer-to-peer systems at random, an Efficient Membership Management Protocol (EMMP) is presented. Every node only needs to keep contact with O(log(N)) nodes, and EMMP supports the reliable dissemination of messages. By taking the “distance” between peers into account, EMMP keeps most data transfers within a local area, which reduces backbone traffic and speeds up message dissemination between peers. A “good friend” mechanism is adopted to reduce the impact on the system when a peer fails or leaves. Simulation results show that EMMP is highly efficient and keeps both system redundancy and delay low.
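As a rough illustration of the gossip-style dissemination the abstract describes, the sketch below simulates push gossip over random membership lists of size O(log N). The fanout, round structure, and parameter values are illustrative assumptions, not EMMP itself (which additionally uses distance awareness and the good-friend mechanism):

```python
import math
import random

random.seed(1)

N = 200
FANOUT = math.ceil(math.log2(N))          # O(log N) contacts per node

# Each node keeps a small random membership list (a partial view).
contacts = {u: random.sample([v for v in range(N) if v != u], FANOUT)
            for u in range(N)}

# Push gossip: every informed node forwards to all its contacts each round.
informed = {0}
rounds = 0
while True:
    new = {v for u in informed for v in contacts[u]} - informed
    if not new:
        break
    informed |= new
    rounds += 1
```

In a random FANOUT-out graph almost every node is reachable from the source, so coverage is near-total after logarithmically many rounds.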

2.
Conventional client-server applications can be enhanced by enabling peer-to-peer data sharing between the clients, greatly reducing the scalability concern when a large number of clients access a single server. However, for these “hybrid peer-to-peer applications,” obtaining data from peer clients may not be secure, and clients may lack incentives in providing or receiving data from their peers. In this paper, we describe our mSSL framework that encompasses key security and incentive functions that hybrid peer-to-peer applications can selectively invoke based on their need. In contrast to the conventional SSL protocol that only protects client-server connections, mSSL not only supports client authentication and data confidentiality, but also ensures data integrity through a novel exploit of Merkle hash trees, all under the assumption that data sharing can be between untrustworthy clients. Moreover, with mSSL’s incentive functions, any client that provides data to its peers can also obtain accurate proofs or digital money for its service securely and reliably. Our evaluation further shows that mSSL is not only fast and effective, but also has a reasonable overhead.
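The integrity mechanism the abstract mentions builds on Merkle hash trees: a client can verify any received block against a trusted root hash using only a logarithmic number of sibling hashes. The sketch below is a generic Merkle-tree construction for illustration, not mSSL's actual wire format or API:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(b) for b in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from the leaf at `index` up to the root."""
    level = [h(b) for b in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))   # (hash, sibling-is-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(block, proof, root):
    node = h(block)
    for sibling, sib_is_left in proof:
        node = h(sibling + node) if sib_is_left else h(node + sibling)
    return node == root
```

A downloader who trusts only the root (e.g., obtained from the server) can thus check each block received from an untrusted peer in isolation.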

3.
A Grid information system should rely upon two basic features: the replication and dissemination of information about Grid services and resources, and the distribution of such information among Grid hosts. This paper examines an approach based on ant systems to replicate and map Grid services information on Grid hosts according to the semantic classification of such services. The Ant-based Replication and MApping Protocol (ARMAP) is used to disseminate resource information by a decentralized mechanism, and its effectiveness is evaluated by means of an entropy index. Information is disseminated by agents – ants – that traverse the Grid by exploiting P2P interconnections among Grid hosts. A mechanism inspired by real ants’ pheromone is used by each agent to autonomously drive its behavior on the basis of its interaction with the environment. “Swarm Intelligence” emerges from the activity of a high number of ants. The ARMAP protocol enables the use of a semi-informed search algorithm which can drive query messages towards a cluster of peers having information about resources belonging to the requested class. A simulation analysis has been performed to evaluate the performance of ARMAP.

4.
In this article, we address the problem of counting the number of peers in a peer-to-peer system. This functionality has proven useful in the design of several peer-to-peer applications. However, it is delicate to achieve when nodes are organised in an overlay network, and each node has only a limited, local knowledge of the whole system. In this paper, we propose a generic technique, called the Sample&Collide method, to solve this problem. It relies on a sampling sub-routine which returns randomly chosen peers. Such a sampling sub-routine is of independent interest. It can be used for instance for neighbour selection by new nodes joining the system. We use a continuous time random walk to obtain such samples. The core of the method consists in gathering random samples until a target number of redundant samples are obtained. This method is inspired by the “birthday paradox” technique of Bawa et al. (Estimating aggregates on a peer-to-peer network, Technical Report, Department of Computer Science, Stanford University), upon which it improves by achieving a target variance with fewer samples. We analyse the complexity and accuracy of the proposed method. We illustrate in particular how expansion properties of the overlay affect its performance. We use simulations to evaluate its performance in both static and dynamic environments with sudden changes in peer populations, and verify that it tracks varying system sizes accurately.
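The birthday-paradox estimator at the core of the method can be sketched as follows: draw (approximately) uniform peer samples until a target number of collisions is seen, then invert the expected collision count. The uniform sampler, population size, and collision target below are illustrative stand-ins for the paper's random-walk sampling sub-routine:

```python
import random

def sample_and_collide(sample_peer, target_collisions=20):
    """Estimate population size from sample collisions (birthday paradox)."""
    seen = {}
    collisions, k = 0, 0
    while collisions < target_collisions:
        s = sample_peer()
        collisions += seen.get(s, 0)   # each earlier copy of s is one colliding pair
        seen[s] = seen.get(s, 0) + 1
        k += 1
    # E[collisions] ~= k*(k-1) / (2n), hence the estimate below.
    return k * (k - 1) / (2 * collisions)

random.seed(7)
TRUE_N = 5000
estimate = sample_and_collide(lambda: random.randrange(TRUE_N))
```

Raising the collision target reduces the estimator's variance at the cost of more samples, which is the trade-off the paper analyses.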

5.
Process mining includes the automated discovery of processes from event logs. Based on observed events (e.g., activities being executed or messages being exchanged) a process model is constructed. One of the essential problems in process mining is that one cannot assume to have seen all possible behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such “overfitting” by generalizing the model to allow for more behavior. This generalization is often driven by the representation language and very crude assumptions about completeness. As a result, parts of the model are “overfitting” (allow only for what has actually been observed) while other parts may be “underfitting” (allow for much more behavior without strong support for it). None of the existing techniques enables the user to control the balance between “overfitting” and “underfitting”. To address this, we propose a two-step approach. First, using a configurable approach, a transition system is constructed. Then, using the “theory of regions”, the model is synthesized. The approach has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches.
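The first step of the two-step approach, building a transition system from log prefixes under a configurable state abstraction, can be sketched as below. The abstraction function is the knob that trades overfitting against underfitting (a set abstraction is assumed here purely for illustration); the second step, synthesis via the theory of regions, is not shown:

```python
def build_transition_system(log, abstraction=frozenset):
    """States abstract trace prefixes; edges are the observed activities."""
    states, transitions = {abstraction([])}, set()
    for trace in log:
        prefix = []
        for act in trace:
            src = abstraction(prefix)
            prefix.append(act)
            dst = abstraction(prefix)
            states.add(dst)
            transitions.add((src, act, dst))
    return states, transitions
```

A coarser abstraction (e.g., the last activity only) merges more prefixes into one state and thus generalizes more; the identity abstraction reproduces the log exactly.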

6.
A highly practical parallel signcryption scheme named PLSC, built from trapdoor permutations (TDPs for short), was designed to process long messages directly. The new scheme follows the idea “scramble all, and encrypt small”: it applies a scrambling operation to the message m along with the users’ identities, and then passes, in parallel, small parts of the scrambling result through the corresponding TDPs. This design enables the scheme to flexibly handle long messages of arbitrary length while avoiding repeated TDP invocations (as in the CBC mode) or a verbose black-box composition of symmetric encryption and signcryption, resulting in noticeable practical savings in both message bandwidth and efficiency. Concretely, the signcryption scheme requires exactly one computation of the “receiver’s TDP” (for “encryption”) and one inverse computation of the “sender’s TDP” (for “authentication”), which is of great practical significance for processing long messages directly, since the major bottleneck of many public-key encryption schemes is the excessive computational overhead of TDP operations. By cutting out the verbose repeated padding, the newly proposed scheme is more efficient than a black-box hybrid scheme. Most importantly, the proposed scheme has been proven to be tightly semantically secure under adaptive chosen-ciphertext attacks (IND-CCA2) and to provide integrity of ciphertext (INT-CTXT) as well as non-repudiation in the random oracle model. All of these security guarantees are provided in the full multi-user, insider-security setting. Moreover, though the scheme is designed for long messages, it may also be appropriate for settings where it is impractical to process large blocks of messages (i.e., extremely low-memory environments such as smart cards).

7.
We present Juxtaposed approximate PageRank (JXP), a distributed algorithm for computing PageRank-style authority scores of Web pages on a peer-to-peer (P2P) network. Unlike previous algorithms, JXP allows peers to have overlapping content and requires no a priori knowledge of other peers’ content. Our algorithm combines locally computed authority scores with information obtained from other peers by means of random meetings among the peers in the network. This computation is based on a Markov-chain state-lumping technique, and iteratively approximates global authority scores. The algorithm scales with the number of peers in the network and we show that the JXP scores converge to the true PageRank scores that one would obtain with a centralized algorithm. Finally, we show how to deal with misbehaving peers by extending JXP with a reputation model. Partially supported by the EU within the 6th Framework Programme under contract 001907 “Dynamically Evolving, Large Scale Information Systems” (DELIS).
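JXP's local building block is PageRank-style power iteration over the link graph a peer knows about. The sketch below shows plain power iteration on a small graph for illustration only; JXP's peer meetings and Markov-chain state lumping (a "world node" summarizing everything outside the peer) are omitted:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration: rank mass flows along out-links each step."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if not out:                        # dangling node: spread uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
            else:
                for v in out:
                    new[v] += damping * rank[u] / len(out)
        rank = new
    return rank
```

Because the update redistributes all rank mass, the scores stay a probability distribution, which is what lets JXP reason about convergence to the centralized scores.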

8.
Handling message semantics with Generic Broadcast protocols
Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,” that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic Broadcast. The paper also presents two algorithms that solve Generic Broadcast. Received: August 2000 / Accepted: August 2001
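The conflict relation determines which message pairs must be ordered: with an empty relation Generic Broadcast degenerates to Reliable Broadcast, and with a total relation to Atomic Broadcast. The sketch below merely illustrates this definition by enumerating the constrained pairs; it is not a broadcast algorithm:

```python
def constrained_pairs(messages, conflict):
    """List the message pairs whose delivery order must agree at all
    processes; non-conflicting pairs may be delivered in any order."""
    pairs = []
    for i, m1 in enumerate(messages):
        for m2 in messages[i + 1:]:
            if conflict(m1, m2):
                pairs.append((m1, m2))
    return pairs
```

For example, a conflict relation might hold only between two writes to the same object, leaving reads unordered with respect to each other.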

9.
Erik Hollnagel’s body of work in the past three decades has molded much of the current research approach to system safety, particularly notions of “error”. Hollnagel regards “error” as a dead-end and avoids using the term. This position is consistent with Rasmussen’s claim that there is no scientifically stable category of human performance that can be described as “error”. While this systems view is undoubtedly correct, “error” persists. Organizations, especially formal business, political, and regulatory structures, use “error” as if it were a stable category of human performance. They apply the term to performances associated with undesired outcomes, tabulate occurrences of “error”, and justify control and sanctions through “error”. Although a compelling argument can be made for Hollnagel’s view, it is clear that notions of “error” are socially and organizationally productive. The persistence of “error” in management and regulatory circles reflects its value as a means for social control.

10.
Requirement emergence computation of networked software
Emergence computation has become a hot topic in complex-systems research in recent years. With the substantial increase in the scale and complexity of network-based information systems, uncertain user requirements from the Internet and personalized application requirements result in frequent changes to software requirements. Meanwhile, software systems built on resources they do not own become more and more complex. Furthermore, the interaction and cooperation requirements between software units and the running environment in service computing increase the complexity of software systems. Software systems with complex-system characteristics are developing into “networked software” characterized by change-on-demand and change-with-cooperation. The familiar notions of “programming”, “compiling” and “running” software are extended from the “desktop” to the “network”. The core issue of software engineering is shifting to requirements engineering, which is becoming the research focus of complex-system software engineering. In this paper, we present a software network view based on complex-system theory, and the concepts of networked software and networked requirements. We pose the challenge problems in the research of emergence computation of networked software requirements. A hierarchical and cooperative unified requirement modeling framework, URF (Unified Requirement Framework), and the related RGPS (Role, Goal, Process and Service) meta-models are proposed. Five scales and an evolutionary growth mechanism in the requirement emergence computation of networked software are given, with a focus on user-dominant and domain-oriented requirements, and the rules and predictability of requirement emergence computation are analyzed. A case study in networked e-Business with evolutionary growth based on the State design pattern is presented in the end.

11.
Model-checking is becoming an accepted technique for debugging hardware and software systems. Debugging is based on the “Check/Analyze/Fix” loop: check the system against a desired property, producing a counterexample when the property fails to hold; analyze the generated counterexample to locate the source of the error; fix the flawed artifact—the property or the model. The success of model-checking non-trivial systems critically depends on making this Check/Analyze/Fix loop as tight as possible. In this paper, we concentrate on the Analyze part of the debugging loop. To this end, we present a framework for generating, structuring and exploring counterexamples, implemented in a tool called KEGVis. The framework is based on the idea that the most general type of evidence to why a property holds or fails to hold is a proof. Such proofs can be presented to the user in the form of proof-like counterexamples, without sacrificing any of the intuitiveness and close relation to the model that users have learned to expect from model-checkers. Moreover, proof generation is flexible, and can be controlled by strategies, whether built into the tool or specified by the user, thus enabling generation of the most “interesting” counterexample and its interactive exploration. Moreover, proofs can be used to generate and display all relevant evidence together, a technique referred to as abstract counterexamples. Overall, our framework can be used for explaining the reason why the property failed or succeeded, determining whether the property was correct (“specification debugging”), and for general model exploration.

12.
In recent years, on-demand transport systems (such as demand-bus systems) have attracted attention as a new transport service in Japan. An on-demand vehicle visits pick-up and delivery points door-to-door as requests occur. This service can be regarded as a cooperative (or competitive) profit problem among transport vehicles, so decision-making is an important factor for the profits of vehicles (i.e., drivers). However, it is difficult to find an optimal solution to the problem, because there are uncertain risks, e.g., the occurrence probability of requests and the selfishness of rival vehicles. Therefore, this paper proposes a transport policy for on-demand vehicles to control these uncertain risks. First, we classify the profit of vehicles into “assured profit” and “potential profit”. Second, we propose a “profit policy” and a “selection policy” based on this classification of profits. The selection policy is further classified into “greed”, “mixed”, “competitive”, and “cooperative”; these selection policies are represented by selection probabilities over the next visit points, so that vehicles cooperate or compete with one another. Finally, we report simulation results and analyze the effectiveness of the proposed policies.

13.
Service management and design has largely focused on the interactions between employees and customers. This perspective holds that the quality of the “service experience” is primarily determined during this final “service encounter” that takes place in the “front stage.” This emphasis discounts the contribution of the activities in the “back stage” of the service value chain where materials or information needed by the front stage are processed. However, the vast increase in web-driven consumer self-service applications and other automated services requires new thinking about service design and service quality. It is essential to consider the entire network of services that comprise the back and front stages as complementary parts of a “service system.” We need new concepts and methods in service design that recognize how back stage information and processes can improve the front stage experience. This paper envisions a methodology for designing service systems that synthesizes (front-stage-oriented) user-centered design techniques with (back stage) methods for designing information-intensive applications.

14.
This paper argues that the time is now right to field practical Spoken Language Translation (SLT) systems. Several sorts of practical systems can be built over the next few years if system builders recognize that, at the present state of the art, users must cooperate and compromise with the programs. Further, SLT systems can be arranged on a scale, in terms of the degree of cooperation or compromise they require from users. In general, the broader the intended linguistic or topical coverage of a system, the more user cooperation or compromise it will presently require. The paper briefly discusses the component technologies of SLT systems as they relate to user cooperation and accommodation (“human factors engineering”), with examples from the authors’ work. It describes three classes of “cooperative” SLT systems which could be put into practical use during the next few years. All trademarks are hereby acknowledged. All URLs last accessed between 6th and 25th January 2006.

15.
The Atomic Broadcast algorithm described in this paper can deliver messages in two communication steps, even if multiple processes broadcast at the same time. It tags all broadcast messages with the local real time, and delivers all messages in the order of these timestamps. Both positive and negative statements are used: “m broadcast at time 51” vs. “no messages broadcast between times 31 and 51”. To prevent crashed processes from blocking the system, the elected leader broadcasts negative statements on behalf of the processes it suspects to have crashed. A new cheap Generic Broadcast algorithm is used to ensure consistency between conflicting statements. It requires only a majority of correct processes (n > 2f) and, in failure-free runs, delivers all non-conflicting messages in two steps. The main algorithm satisfies several new lower bounds, which are proved in this paper.
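The timestamp-ordered delivery rule can be sketched as follows: a message with timestamp t becomes deliverable once every process's statements (positive or negative) cover all times up to t, so nothing with a smaller timestamp can still arrive. This single-node simulation of the delivery condition is an illustrative assumption, not the paper's full fault-tolerant protocol:

```python
import heapq

class TimestampDelivery:
    """Deliver broadcast messages in timestamp order once every process's
    statement frontier has passed the message's timestamp."""
    def __init__(self, n):
        self.frontier = [0] * n      # latest time each process has covered
        self.pending = []            # min-heap of (timestamp, message)
        self.delivered = []

    def on_broadcast(self, sender, t, msg):
        heapq.heappush(self.pending, (t, msg))
        self.on_statement(sender, t)         # a broadcast is a positive statement

    def on_statement(self, sender, t):
        # positive ("m broadcast at t") or negative ("nothing until t")
        self.frontier[sender] = max(self.frontier[sender], t)
        cut = min(self.frontier)
        while self.pending and self.pending[0][0] <= cut:
            self.delivered.append(heapq.heappop(self.pending)[1])
```

The leader's negative statements on behalf of suspected processes advance their frontier entries, which is what keeps a crashed process from freezing the cut forever.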

16.
17.
In large-scale peer-to-peer (P2P) video-on-demand (VoD) streaming applications, a fundamental challenge is to quickly locate new supplying peers whenever a VCR command is issued, in order to achieve a smooth viewing experience. In many existing commercial systems, which use tracker servers for neighbor discovery, the increasing scale of P2P VoD systems has overloaded the dedicated servers to the point where they cannot accurately identify suppliers with the desired content and bandwidth. To avoid overloading the servers and achieve instant neighbor discovery over the self-organizing P2P overlay, we design InstantLeap, a novel method of organizing the peers watching a video. It features a light-weight indexing architecture that supports efficient streaming and fast neighbor discovery at the same time. InstantLeap separates the neighbors at each peer into a streaming neighbor list and a shortcut neighbor list, for streaming and neighbor discovery respectively, which are maintained loosely but effectively through random neighbor-list exchanges. Our analysis shows that InstantLeap achieves O(1) neighbor discovery upon any playback “leap” across the media stream in streaming overlays of any size, with low messaging costs for overlay maintenance upon peer joins, departures, and VCR operations. We also verify our design with large-scale simulations of dynamic P2P VoD systems based on real-world settings.
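The two-list idea can be sketched as below: each peer keeps streaming neighbors in its own playback segment plus one shortcut contact per other segment, refreshed by random list exchanges, so a "leap" is a single dictionary lookup. The class, method names, and segment model are illustrative assumptions, not InstantLeap's actual design:

```python
class Peer:
    """Each peer keeps a streaming list (same playback segment) and a
    shortcut list (one known contact per segment) for O(1) leaps."""
    def __init__(self, pid, segment):
        self.pid, self.segment = pid, segment
        self.streaming = set()      # peers in the same segment
        self.shortcuts = {}         # segment -> a known peer in it

    def learn(self, other):
        if other.segment == self.segment:
            self.streaming.add(other.pid)
        self.shortcuts[other.segment] = other.pid

    def gossip_exchange(self, other):
        # loose maintenance: merge each other's shortcut views
        merged = {**self.shortcuts, **other.shortcuts}
        self.shortcuts.update(merged)
        other.shortcuts.update(merged)

    def leap_to(self, segment):
        # O(1) lookup of a supplier for the target playback segment
        return self.shortcuts.get(segment)
```

After a few random exchanges every peer's shortcut table covers most segments, which is what makes the leap cost independent of overlay size.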

18.
Ensuring causal consistency in a Distributed Shared Memory (DSM) means that all operations executed at each process comply with a causality order relation. This paper first introduces an optimality criterion for a protocol P, based on complete replication of variables at each process and propagation of write updates, that enforces causal consistency. This criterion measures the capability of a protocol to update the local copy as soon as possible while respecting causal consistency. We then present an optimal protocol built on top of a reliable broadcast communication primitive, and show that previous complete-replication protocols in the literature are not optimal. Interestingly, we prove that the optimal protocol embeds a system of vector clocks which captures the read/write semantics of a causal memory. From an operational point of view, an optimal protocol strongly reduces its message buffer overhead. Simulation studies show that the optimal protocol buffers roughly an order of magnitude fewer messages than non-optimal ones based on the same communication primitive.
R. Baldoni: Roberto Baldoni is a Professor of Distributed Systems at the University of Rome “La Sapienza”. He has published more than one hundred papers (from theory to practice) in the fields of distributed and mobile computing, middleware platforms and information systems. He is the founder of the Middleware Laboratory (MIDLAB), whose members participate in national and European research projects. He regularly serves as an expert for the EU commission in the evaluation of EU projects. Roberto Baldoni chaired the program committee of the “distributed algorithms” track of the 19th IEEE International Conference on Distributed Computing Systems (ICDCS-99) and was PC Co-chair of the ACM International Workshop on Principles of Mobile Computing (POMC). He has also served on the organizing and program committees of many premier international conferences and workshops.
A. Milani: Alessia Milani is currently pursuing a joint doctoral thesis between the Department of Computer and Systems Science of the University of Rome “La Sapienza” and the University of Rennes I, IRISA. She earned a Laurea degree in Computer Engineering at the University of Rome “La Sapienza” in May 2003. Her research activity is in the area of distributed systems; her current interests include communication paradigms, in particular distributed shared memories, replication, and consistency criteria.
S. Tucci Piergiovanni: Sara Tucci Piergiovanni is currently a Ph.D. student at the Department of Computer and Systems Science of the University of Rome “La Sapienza”. She earned a Laurea degree in Computer Engineering at the University of Rome “La Sapienza” in March 2002 with marks 108/110. Her Laurea thesis was awarded the Italian national “Federcommin-AICA” prize 2002 for the best Laurea thesis in Information Technology. Her research activity is in the area of distributed systems. Early work addressed fault-tolerance in asynchronous systems and software replication; her current focus is on communication paradigms that provide “anonymous” communication, such as publish/subscribe and distributed shared memories. The core contributions are several papers published in international conferences and journals.
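The vector-clock machinery the abstract mentions can be sketched as below: a replica applies a remote write update only when the update's vector clock shows every causally preceding write has already been applied, buffering it otherwise. This is a generic causal-delivery sketch under a complete-replication assumption, not the paper's optimal protocol:

```python
class CausalReplica:
    """Apply remote write updates only when causally ready (vector clocks)."""
    def __init__(self, pid, n):
        self.pid, self.vc = pid, [0] * n
        self.store, self.buffer = {}, []

    def local_write(self, var, value):
        self.vc[self.pid] += 1
        self.store[var] = value
        return (self.pid, list(self.vc), var, value)   # update to broadcast

    def on_update(self, update):
        self.buffer.append(update)
        self._drain()

    def _ready(self, sender, vc):
        # next update from sender, and no missing updates from anyone else
        return (vc[sender] == self.vc[sender] + 1 and
                all(vc[k] <= self.vc[k] for k in range(len(vc)) if k != sender))

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for u in list(self.buffer):
                sender, vc, var, value = u
                if self._ready(sender, vc):
                    self.store[var] = value
                    self.vc[sender] = vc[sender]
                    self.buffer.remove(u)
                    progress = True
```

The paper's optimality criterion concerns exactly how long such updates sit in the buffer: an optimal protocol applies each one at the earliest moment causal consistency allows.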

19.
Resource search is one of the hot topics in P2P systems research, and unstructured P2P resource lookup generally relies on flooding. As query requests increase, the number of messages grows exponentially, causing severe network congestion and bandwidth waste, and query efficiency cannot be guaranteed. To address this problem, a resource lookup algorithm for unstructured P2P networks based on local clustering is presented. Through local K-means clustering of resource feature vectors and the establishment of similarity links, it effectively improves resource retrieval efficiency and avoids the bandwidth waste caused by the spread of query messages. Experiments show that the method effectively shortens the average search path length and improves the lookup success rate.
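The local clustering step can be sketched with a plain K-means over resource feature vectors; peers whose local clusters are similar would then establish the similarity links the abstract describes. The vectors, k, and iteration count below are illustrative assumptions:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Cluster resource feature vectors by squared Euclidean distance."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            clusters[i].append(v)
        centers = [
            [sum(col) / len(cl) for col in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)       # keep old center if cluster empties
        ]
    return centers, clusters
```

Queries can then be routed preferentially toward peers whose cluster centers are close to the query's feature vector, instead of being flooded.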

20.
We describe a mechanism called SpaceGlue for adaptively locating services based on the preferences and locations of users in a distributed and dynamic network environment. In SpaceGlue, services are bound to physical locations, and a mobile user accesses local services depending on the space he/she is currently visiting. SpaceGlue dynamically identifies the relationships between different spaces and links or “glues” spaces together depending on how previous users moved among them and used their services. Once spaces have been glued, users receive recommendations of remote services (i.e., services provided in a remote space) reflecting the preferences of the crowd of users visiting the area. The strengths of bonds are implicitly evaluated by users and adjusted by the system on the basis of their evaluation. SpaceGlue is an alternative to existing schemes such as data mining and recommendation systems, and it is suitable for distributed and dynamic environments. The bonding algorithm for SpaceGlue incrementally computes the relationships, or “bonds”, between different spaces in a distributed way. We implemented SpaceGlue using the distributed network application platform Ja-Net and evaluated it by simulation to show that it adaptively locates services reflecting trends in user preferences. Using “Mutual Information (MI)” and “F-measure” to quantify the strength of such trends and the accuracy of service recommendation, the simulation results showed that (1) in SpaceGlue, the F-measure increases with the level of MI (i.e., the more significant the trends, the greater the F-measure), (2) SpaceGlue achieves better precision and F-measure than the “flooding” case (i.e., every piece of service information is broadcast to everybody) and the “no glue” case by narrowing down the appropriate partners to send recommendations to based on bonds, and (3) SpaceGlue achieves a better F-measure with large numbers of spaces and users than the other cases (i.e., “flooding” and “no glue”).
Tomoko Itao is an alumna of NTT Network Innovation Laboratories.
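The F-measure used to score recommendation accuracy above is the harmonic mean of precision and recall; a minimal computation (with hypothetical service IDs) looks like this:

```python
def precision_recall_f(recommended, relevant):
    """Precision, recall, and F-measure of a recommendation list
    against the set of services the user actually found relevant."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```

Flooding maximizes recall but ruins precision, which is why SpaceGlue's narrower, bond-based recommendations can score a higher F-measure.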


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号