Similar Documents
20 similar documents found (search time: 265 ms)
1.
The future Internet will support pervasive applications and communication models that require end-node cooperation, such as fog computing and machine-to-machine communication. Among these applications, video streaming can also be provided with a cooperative, peer-to-peer approach. Cooperative distribution requires building a distribution overlay on top of the physical topology (the underlay). This work proposes an optimized, cross-layer approach that builds this overlay while minimizing the impact on the underlay. We design an optimal strategy, whose underlying problem is proven to be NP-complete and thus not solvable with a distributed, lightweight protocol. The optimal strategy is then relaxed by exploiting knowledge of the betweenness centrality of the nodes in the underlay topology, yielding two easily implementable solutions applicable to any link-state protocol for distributed wireless mesh networks. Additional heuristic improvements further optimize performance in real network scenarios. Extensive simulation results on three different network topologies support the theoretical findings: the relaxed implementations come reasonably close to the optimal solution and provide large gains over the traditional overlay topologies that peer-to-peer applications build.
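To make the centrality idea concrete, the hypothetical Python sketch below (not the authors' protocol; the function name, degree bound, and toy underlay are assumptions) ranks a node's underlay neighbors by betweenness centrality and keeps only the most central ones as overlay links.

```python
# Illustrative sketch only: overlay neighbor selection biased towards
# high betweenness-centrality underlay nodes. The function name and the
# degree bound are assumptions, not the paper's actual protocol.
import networkx as nx

def build_centrality_overlay(underlay: nx.Graph, max_degree: int = 3) -> nx.Graph:
    bc = nx.betweenness_centrality(underlay)          # underlay centrality scores
    overlay = nx.Graph()
    overlay.add_nodes_from(underlay.nodes)
    for node in underlay.nodes:
        # Prefer the most central underlay neighbors as overlay peers.
        candidates = sorted(underlay.neighbors(node), key=lambda n: bc[n], reverse=True)
        for peer in candidates[:max_degree]:
            overlay.add_edge(node, peer)
    return overlay

if __name__ == "__main__":
    g = nx.random_geometric_graph(50, 0.25, seed=1)   # toy wireless-mesh-like underlay
    print(build_centrality_overlay(g).number_of_edges())
```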

2.
Using a peer-to-peer approach for live multimedia streaming applications promises a highly scalable, decentralized, and robust distribution service. When constructing streaming topologies, however, specific care has to be taken to ensure that quality-of-service requirements in terms of delay, jitter, packet loss, and stability against deliberate denial-of-service attacks are met. In this paper, we concentrate on the latter requirement: stability against denial-of-service attacks. We present an analytical model to assess the stability of overlay streaming topologies and describe attack strategies. Building on this, we describe topologies that are optimally stable against perfect attacks based on global knowledge and give a mathematical proof of their optimality. The formal construction and analysis of these topologies using global knowledge lead us to distributed procedures that can construct resilient topologies in scenarios where global knowledge cannot be gathered. Experimental results show that the topologies created in such real-world scenarios are close to optimally stable against perfect denial-of-service attacks.

3.
The “small-world” graph structure is pervasive and is observed to arise “without-design” or “naturally” in many practical systems such as the World Wide Web. In contrast to natural systems, overlay networks provide an opportunity to design structure. We seek the advantages of designing overlay topologies with small-world properties to support file sharing in peer-to-peer networks. We focus on two metrics of performance: (a) search protocol performance, a local gain perceived directly by peer-to-peer network users and (b) network utilization, a global property that is of interest to network service providers. We propose a class of overlay topologies and show, by simulation, that a particular topology instance of this class where every node has many close neighbors and few random neighbors (i.e., a small-world graph) exhibits very good properties. In this overlay topology, the chances of locating files are high, and the nodes where these files are found are, on average, close to the query source. This improvement in search protocol performance is achieved while decreasing the traffic load on the links in the underlying network. We propose a simple greedy algorithm to construct such overlay topologies where each node operates independently and in a decentralized manner to select its neighbors.
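The greedy construction can be pictured with the hypothetical sketch below, an assumption-laden stand-in rather than the paper's algorithm: each node keeps its k_close nearest peers plus k_random random long-range peers, which is the "many close, few random" recipe for a small-world overlay.

```python
# Illustrative sketch only (not the paper's exact construction): each node
# greedily keeps many "close" neighbors plus a few random long-range links,
# yielding a small-world-like overlay. Node coordinates stand in for
# underlay proximity (e.g., measured latency); the parameters are assumptions.
import math
import random

def small_world_overlay(coords, k_close=5, k_random=2, seed=0):
    rng = random.Random(seed)
    nodes = list(coords)
    edges = set()
    for u in nodes:
        others = [v for v in nodes if v != u]
        # Many close neighbors: the k_close nearest peers by coordinate distance.
        others.sort(key=lambda v: math.dist(coords[u], coords[v]))
        neighbors = others[:k_close]
        # Few random neighbors: long-range shortcuts chosen uniformly at random.
        neighbors += rng.sample(others[k_close:], k_random)
        for v in neighbors:
            edges.add(tuple(sorted((u, v))))
    return edges

if __name__ == "__main__":
    pts = {i: (random.random(), random.random()) for i in range(100)}
    print(len(small_world_overlay(pts)))
```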

4.
The problem of efficient resource location is an important open issue in P2P systems. This paper introduces DANTE, a self-adapting P2P system that changes its peer links to form topologies where resources can be located efficiently via random walks. Additionally, this same self-adaptation capacity makes DANTE capable of reacting to events like changes in the system load or attacks on well-connected nodes by adjusting the topology to the new scenario. This adaptive behavior emerges as the global result of the individual work of nodes, without the intervention of any central entity or the need for global knowledge. Simulations show that this adaptation process makes the system scalable, resilient to attacks, and tolerant to a high transitivity of peers. Simulations are also used to compare this solution with other well-known self-adapting P2P systems. From these results it can be concluded that the topologies achieved by DANTE offer better performance.

5.
Blockchain is a distributed ledger system built on a peer-to-peer network; it is decentralized, tamper-proof, and trustworthy, and has therefore attracted wide attention. In blockchain systems, typical Byzantine faults include operational errors, network delays, system crashes, and malicious attacks. Existing consensus algorithms not only have low tolerance for Byzantine nodes in the blockchain but also scale poorly. To address this problem, this paper proposes a Byzantine consensus algorithm based on the Gossip protocol, which allows the system to tolerate fewer than half of the nodes being Byzantine, matching the fault tolerance of the XFT consensus algorithm. Moreover, because a unified data structure is adopted, the system has better scalability and correct nodes can more easily identify malicious nodes in the blockchain system. In this algorithm, the proposer node rotates as the blockchain length changes and all nodes in the system have equal status, which avoids single points of failure and gives the system better dynamic load balancing.
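As a rough illustration of two ingredients mentioned in the abstract (deterministic proposer rotation with chain length, and tolerance of fewer than half Byzantine nodes), the following Python sketch is a hypothetical simplification, not the proposed algorithm; the hashing scheme and function names are assumptions.

```python
# Hypothetical sketch, not the paper's algorithm: (1) the proposer rotates
# deterministically with the chain length, so every node can compute it
# locally; (2) a decision requires acknowledgements from more than half of
# the nodes, i.e. a strict majority.
import hashlib

def proposer_for_height(height: int, node_ids: list) -> str:
    # Deterministic rotation: all correct nodes derive the same proposer
    # from the current blockchain length (height).
    digest = hashlib.sha256(str(height).encode()).digest()
    return sorted(node_ids)[int.from_bytes(digest, "big") % len(node_ids)]

def majority_reached(acks: int, n: int) -> bool:
    # Tolerating fewer than half Byzantine nodes means waiting for > n/2 acks.
    return acks > n // 2

if __name__ == "__main__":
    peers = ["n1", "n2", "n3", "n4", "n5"]
    print(proposer_for_height(42, peers), majority_reached(3, 5))
```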

6.
Connectivity-based node clustering has wide-ranging applications in decentralized peer-to-peer (P2P) networks such as P2P file sharing systems, mobile ad-hoc networks, P2P sensor networks, and so forth. This paper describes a connectivity-based distributed node clustering scheme (CDC). This scheme presents a scalable and efficient solution for discovering connectivity-based clusters in peer networks. In contrast to centralized graph clustering algorithms, the CDC scheme is completely decentralized and only assumes knowledge of neighbor nodes instead of requiring global knowledge of the network (graph) to be available. An important feature of the CDC scheme is its ability to cluster the entire network automatically or to discover clusters around a given set of nodes. To cope with the typical dynamics of P2P networks, we provide mechanisms to allow new nodes to be incorporated into appropriate existing clusters and to gracefully handle the departure of nodes from the clusters. These mechanisms make the CDC scheme extensible and adaptable in the sense that the clustering structure of the network adjusts automatically as nodes join or leave the system. We provide detailed experimental evaluations of the CDC scheme, addressing its effectiveness in discovering good-quality clusters and handling node dynamics. We further study the types of topologies that benefit most from connectivity-based distributed clustering algorithms like CDC. Our experiments show that utilizing the message-based connectivity structure can considerably reduce the messaging cost and provide better utilization of resources, which in turn improves the quality of service of the applications executing over decentralized peer-to-peer networks.

7.
Link rate allocation is very important for supporting high video playback rates in peer-to-peer video streaming. Although many studies address resource allocation for P2P streaming in wired networks, very few have studied the problem in wireless networks, especially wireless multi-hop mesh networks (WMNs), where it remains challenging. To maximize users’ satisfaction with P2P streaming in WMNs, this paper focuses on the link rate allocation problem and proposes a fully distributed algorithm to efficiently utilize the upload and download bandwidth of wireless mesh nodes. We first build an efficient P2P streaming system based on experimental results from a real deployment of our wireless mesh testbed. We then design an efficient distributed algorithm based on the solution to a linear optimization model, which optimizes a user-density-related objective to decide the best streaming rates among peers. Our scheme is resilient to the network dynamics that are characteristic of wireless multi-hop peer-to-peer networks. Simulation experiments demonstrate the significant performance enhancement obtained by using the proposed rate allocation algorithm in WMNs.
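The flavor of such a linear optimization can be shown with the toy model below; it is only a hedged sketch with made-up capacities, not the paper's user-density-related formulation.

```python
# Minimal sketch (not the paper's model) of rate allocation as a linear
# program: choose per-link streaming rates that respect each peer's upload
# and download capacity while maximizing the total delivered rate.
# Capacities are made-up numbers in Mbit/s.
import numpy as np
from scipy.optimize import linprog

upload = np.array([4.0, 2.0, 1.0])      # upload capacity of peers 0..2
download = np.array([3.0, 3.0, 3.0])    # download capacity of peers 0..2
n = len(upload)

# Decision variables: x[i, j] = rate sent from peer i to peer j (flattened row-wise).
c = -np.ones(n * n)                      # maximize total rate -> minimize -sum(x)
A_ub, b_ub = [], []
for i in range(n):                       # upload constraint of sender i
    row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1.0
    A_ub.append(row); b_ub.append(upload[i])
for j in range(n):                       # download constraint of receiver j
    row = np.zeros(n * n); row[j::n] = 1.0
    A_ub.append(row); b_ub.append(download[j])

# Forbid self-links x[i, i]; all other rates are non-negative and unbounded above.
bounds = [(0, 0) if i == j else (0, None) for i in range(n) for j in range(n)]
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
print("total allocated rate:", -res.fun)
```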

8.
In this paper, an algorithm that forms a dynamic and self-organizing network is demonstrated. The hypothesis of this work is that in order to achieve a resilient and adaptive peer-to-peer (P2P) network, each network node must proactively maintain a minimum number of edges. Specifically, low-level communication protocols are not sufficient by themselves to achieve high service availability, especially in the case of ad hoc or dynamic networks with a high degree of node addition and deletion. The concept has been evaluated within a P2P agent application in which each agent has a goal to maintain a preferred number of connections to service-providing agents. Using this algorithm, the agents update a weight value associated with each connection, based on the perceived utility of the connection to the corresponding agent. This utility function can be a combination of several node or edge parameters, such as the degree k of the target node or the frequency of message responses from the node. This weight is updated using a set of Hebbian-style learning rules, such that the network as a whole exhibits adaptive self-organizing behavior. The principal result is the finding that by limiting the connection neighborhood within the overlay topology, the resulting P2P network can be made highly resilient to targeted attacks on high-degree nodes, while maintaining search efficiency.
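A hypothetical rendering of the Hebbian-style idea follows; the update rule, constants, and pruning policy below are assumptions for illustration, not the paper's exact learning rules.

```python
# Illustrative sketch: each agent keeps a weight per connection, reinforces
# links whose peers respond usefully, lets idle links decay, and prunes the
# weakest link when it exceeds its preferred number of connections.
def update_weights(weights, responded, reinforce=0.2, decay=0.05, preferred=8):
    for peer in list(weights):
        if peer in responded:
            weights[peer] += reinforce * (1.0 - weights[peer])  # reward useful links
        else:
            weights[peer] -= decay * weights[peer]              # let idle links fade
    while len(weights) > preferred:
        weights.pop(min(weights, key=weights.get))              # prune the weakest link
    return weights

if __name__ == "__main__":
    w = {f"peer{i}": 0.5 for i in range(10)}
    print(update_weights(w, responded={"peer1", "peer3"}))
```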

9.
In this work we propose a fine-grained approach with a self-adaptive migration rate for distributed evolutionary computation. Our goal is to gain insight into the effects caused by communication when the algorithm scales. To this end, we consider a set of basic topologies in order to avoid overlapping algorithmic effects between communication and topological structures. We analyse the viability of the approach by comparing how solution quality and algorithm speed change as the number of processors increases, and compare it with an Island-model-based implementation. A finer-grained approach implies a better chance of achieving a more scalable system; such a feature is crucial for large-scale parallel architectures such as peer-to-peer systems. To check scalability, we perform a threefold experimental evaluation of this model: first, we concentrate on the algorithmic results when the problem scales up to eight nodes, in comparison with the Island model. Second, we analyse the computing-time speedup of the approach while scaling. Finally, we analyse the network performance under the proposed self-adaptive migration rate policy, which depends on link latency and bandwidth. With this experimental setup, our approach shows better scalability than the Island model and equivalent robustness on average across the three test functions under study.

10.
Topological self-stabilization is an important concept for building robust open distributed systems (such as peer-to-peer systems) where nodes can organize themselves into meaningful network topologies. The goal is to devise distributed algorithms where nodes forward, insert, and delete links to neighboring nodes, and that converge quickly to such a desirable topology, independently of the initial network configuration. This article proposes a new model to study the parallel convergence time. Our model sheds light on the achievable parallelism by avoiding bottlenecks of existing models that can yield a distorted picture. As a case study, we consider local graph linearization—i.e., how to build a sorted list of the nodes of a connected graph in a distributed and self-stabilizing manner. In order to study the main structure and properties of our model, we propose two variants of a very simple local linearization algorithm. For each of these variants, we present analyses of the worst-case and best-case parallel time complexities, as well as the performance under a greedy selection of the actions to be executed. It turns out that the analysis is non-trivial despite the simple setting, and to complement our formal insights we report on experiments that indicate that the runtimes may be better in the average case.
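One classic local linearization rule, simulated sequentially below as a toy illustration (the paper's two variants and its parallel model are not reproduced), repeatedly lets a node with two same-side neighbors keep the closer one and delegate an edge between the two neighbors.

```python
# Toy, sequential simulation of a well-known local linearization rule:
# whenever a node has two neighbors on the same side, it drops the edge to
# the farther one and delegates an edge between the two neighbors, until
# only the sorted list of nodes remains.
def linearize(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            right = sorted(x for x in adj[u] if x > u)
            left = sorted((x for x in adj[u] if x < u), reverse=True)
            for side in (right, left):
                if len(side) > 1:
                    near, far = side[0], side[1]
                    adj[u].discard(far); adj[far].discard(u)   # drop the long edge
                    adj[near].add(far); adj[far].add(near)     # delegate it
                    changed = True
    return sorted((u, v) for u in adj for v in adj[u] if u < v)

if __name__ == "__main__":
    print(linearize([(1, 5), (1, 3), (5, 2), (3, 4), (2, 4)]))  # -> sorted list edges
```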

11.
In recent years, the number of peer-to-peer (P2P) applications has increased significantly. One important problem in many P2P applications is how to efficiently disseminate data from a single source to multiple receivers on the Internet. A successful model used for analyzing this problem is a graph consisting of nodes and edges, with a capacity assigned to each edge. In some situations, however, it is inconvenient to use this model. To that end, we propose to study the problem of efficient data dissemination in a source constraint network. A source constraint network is modeled as a graph in which the capacity is associated with a node, rather than an edge. The contributions of this paper include (a) a quantitative analysis of data dissemination in any source constraint network, (b) a set of topologies suitable for data dissemination in P2P networks, and (c) an architecture and implementation of a P2P system based on the proposed optimal topologies. We present experimental results of our P2P system deployed on PlanetLab nodes, demonstrating that our approach achieves near-optimal throughput while providing scalability, low delay, and bandwidth fairness among peers.

12.
Serverless distributed computing has received significant attention from both the industry and the research community. Among the most popular applications are the wide area network file systems, exemplified by CFS, Farsite and OceanStore. These file systems store files on a large collection of untrusted nodes that form an overlay network. They use cryptographic techniques to maintain file confidentiality and integrity from malicious nodes. Unfortunately, cryptographic techniques cannot protect a file holder from a Denial-of-Service (DoS) or a host compromise attack. Hence, most of these distributed file systems are vulnerable to targeted file attacks, wherein an adversary attempts to attack a small (chosen) set of files by attacking the nodes that host them. This paper presents LocationGuard - a location hiding technique for securing overlay file storage systems from targeted file attacks. LocationGuard has three essential components: (i) location key, (ii) routing guard, a secure algorithm that protects accesses to a file in the overlay network given its location key, and (iii) a set of location inference guards. Our experimental results quantify the overhead of employing LocationGuard and demonstrate its effectiveness against DoS attacks, host compromise attacks and various location inference attacks.
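One plausible way to picture the location-key idea (this is an illustrative assumption; the abstract does not spell out LocationGuard's actual construction) is to derive a file's overlay identifier from a keyed hash, so that nodes lacking the secret key cannot compute where the file is hosted.

```python
# Hypothetical illustration of a "location key": the overlay identifier used
# to store/locate a file is a keyed hash of its name, so an attacker without
# the key cannot target the hosting node. Names and sizes are assumptions.
import hmac
import hashlib
import secrets

ID_BITS = 160  # typical DHT identifier size; an assumption here

def file_location_id(location_key: bytes, filename: str) -> int:
    digest = hmac.new(location_key, filename.encode(), hashlib.sha1).digest()
    return int.from_bytes(digest, "big") % (1 << ID_BITS)   # position on the overlay ring

if __name__ == "__main__":
    key = secrets.token_bytes(20)                 # per-file secret location key
    print(hex(file_location_id(key, "report.pdf")))
```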

13.
Increasingly, digital communication is routed among wireless, mobile computers over ad-hoc, unsecured communication channels. In this paper, we design two stochastic search algorithms (a greedy heuristic and an evolutionary algorithm) that automatically search for strong insider attack methods against a given ad-hoc, delay-tolerant communication protocol, and thus expose its weaknesses. To assess their performance, we apply the two algorithms to two simulated, large-scale mobile scenarios (of different route morphology) with 200 nodes having free range of movement. We investigate a choice of two standard attack strategies (dropping messages and flooding the network), and four delay-tolerant routing protocols: First Contact, Epidemic, Spray and Wait, and MaxProp. We find dramatic drops in performance: replicative protocols (Epidemic, Spray and Wait, MaxProp), formerly deemed resilient, are compromised to different degrees (delivery rates between 24% and 87%), while a forwarding protocol (First Contact) is shown to drop delivery rates to under 5% — in all cases by well-crafted attack strategies and with an attacker group of size less than 10% of the total network size. Overall, we show that the two proposed methods combined constitute an effective means to discover (at design time) and raise awareness about the weaknesses and strengths of existing ad-hoc, delay-tolerant communication protocols against potential malicious cyber-attacks.

14.
The rise of mobile computing platforms has enabled a new class of applications: mobile applications that interact with peer applications running on neighbouring phones. Developing such applications is challenging because of problems inherent to concurrent and distributed programming, and because of problems inherent to mobile networks, such as the fact that wireless network connectivity is often intermittent, and the lack of centralized infrastructure to coordinate the peers. We present AmbientTalk, a distributed programming language designed specifically to develop mobile peer-to-peer applications. AmbientTalk aims to make it easy to develop mobile applications that are resilient to network failures by design. We describe the language's concurrency and distribution model in detail, as it lies at the heart of AmbientTalk's support for responsive, resilient application development. The model is based on communicating event loops, itself a descendant of the actor model. We contribute a small-step operational semantics for this model and use it to establish data-race and deadlock freedom.
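The communicating-event-loops idea can be sketched in Python as below; this is a hedged approximation of the model, not AmbientTalk itself, and the class and method names are invented for illustration.

```python
# Minimal sketch of communicating event loops: each actor owns a single
# event-loop thread and a mailbox; other actors interact only by enqueueing
# asynchronous messages, so there is no shared mutable state to race on.
import queue
import threading

class EventLoopActor:
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, method, *args):           # asynchronous; never blocks on the callee
        self._mailbox.put((method, args))

    def _run(self):                           # the actor's single event loop
        while True:
            method, args = self._mailbox.get()
            getattr(self, method)(*args)      # process one message at a time

class Greeter(EventLoopActor):
    def greet(self, name):
        print(f"hello, {name}")

if __name__ == "__main__":
    g = Greeter()
    g.send("greet", "peer-42")
    import time; time.sleep(0.1)              # give the event loop time to run
```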

15.
Large-scale overlay networks have become crucial ingredients of fully decentralized applications and peer-to-peer systems. Depending on the task at hand, overlay networks are organized into different topologies, such as rings, trees, semantic and geographic proximity networks. We argue that the central role overlay networks play in decentralized application development requires a more systematic study and effort towards understanding the possibilities and limits of overlay network construction in its generality. Our contribution in this paper is a gossip protocol called T-Man that can build a wide range of overlay networks from scratch, relying only on minimal assumptions. The protocol is fast, robust, and very simple. It is also highly configurable: the desired topology itself is a parameter, given as a ranking method that orders nodes according to a base node's preference for selecting them as neighbors. The paper presents an extensive empirical analysis of the protocol, along with a theoretical analysis of certain aspects of its behavior. We also describe a practical application of T-Man for building Chord distributed hash table overlays efficiently from scratch.
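A rough sketch of the gossip-and-rank idea follows; the view size, round count, and the ring-distance ranking are illustrative assumptions rather than T-Man's published parameters. In each round a node exchanges its view with a peer, merges the two views, and keeps the entries that the ranking method ranks best.

```python
# Rough sketch of a T-Man-style gossip: the target topology is encoded purely
# in the ranking method; here the ranking prefers small circular distance,
# which drives the views towards a ring.
import random

N, C = 64, 4                                     # node id space and view size (assumed)

def ring_rank(base, candidates):
    dist = lambda a, b: min((a - b) % N, (b - a) % N)
    return sorted(candidates, key=lambda x: dist(base, x))

def gossip_round(views, rng):
    for node in views:
        peer = rng.choice(views[node])           # pick a gossip partner from the view
        merged = set(views[node]) | set(views[peer]) | {node, peer}
        views[node] = ring_rank(node, merged - {node})[:C]   # keep best-ranked entries
        views[peer] = ring_rank(peer, merged - {peer})[:C]

if __name__ == "__main__":
    rng = random.Random(7)
    views = {i: rng.sample([j for j in range(N) if j != i], C) for i in range(N)}
    for _ in range(15):
        gossip_round(views, rng)
    print(views[0])                              # should converge towards 1, 63, 2, 62
```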

16.
Computer Networks, 2003, 41(2): 177-192
Increasingly on the Internet, a set of replicated providers can satisfy a given service or resource request from a client. In this case, it is advantageous to select the best provider according to some distance measure, such as hop count or path latency. In this paper, we present a group-based distance measurement service (GDMS), which estimates and disseminates distance information for node pairs in large-scale wide area networks. GDMS is fully distributed and does not rely on any centralized servers, and is thus particularly suitable for increasingly popular peer-to-peer applications. The key concept in GDMS is measurement groups (MGroups). Nodes are self-organized into MGroups to form a hierarchical structure. A set of algorithms is proposed to handle network dynamics and optimize the group organization to reduce system costs as well as improve estimation accuracy. Moreover, a novel multicast-based algorithm is used for both intra- and inter-group performance measurements. Performance evaluation over different network topologies shows that GDMS is scalable and provides effective distance information to upper-layer applications at a relatively low cost.

17.
Today’s peer-to-peer networks are designed based on the assumption that the participating nodes are cooperative, which does not hold in reality. Incentive mechanisms that promote cooperation must be introduced. However, the existing incentive schemes (using either reputation or virtual currency) suffer from various attacks based on false reports. Even worse, a colluding group of malicious nodes in a peer-to-peer network can manipulate the history information of its own members, and the damaging power increases dramatically with the group size. Such malicious nodes/collusions are difficult to detect, especially in a large network without a centralized authority. In this paper, we propose a new distributed incentive scheme, in which the amount that a node can benefit from the network is proportional to its contribution, malicious nodes can only attack others at the cost of their own interests, and a colluding group cannot gain advantage by cooperation regardless of its size. Consequently, the damaging power of colluding groups is strictly limited. The proposed scheme includes three major components: a distributed authority infrastructure, a key sharing protocol, and a contract verification protocol.

18.
WOW: Self-organizing Wide Area Overlay Networks of Virtual Workstations
This paper describes WOW, a distributed system that combines virtual machine, overlay networking and peer-to-peer techniques to create scalable wide-area networks of virtual workstations for high-throughput computing. The system is architected to facilitate the addition of nodes to a pool of resources through the use of system virtual machines (VMs) and self-organizing virtual network links; to maintain IP connectivity even if VMs migrate across network domains; and to present to end-users and applications an environment that is functionally identical to a local-area network or cluster of workstations. We describe IPOP, a network virtualization technique that builds upon a novel, extensible user-level decentralized technique to discover, establish and maintain overlay links to tunnel IP packets over different transports (including UDP and TCP) and across firewalls. We evaluate latency and bandwidth overheads of IPOP and also the time taken for a new node to become fully routable over the virtual network. We also report on several experiments conducted on a testbed WOW deployment with 118 P2P router nodes over PlanetLab and 33 VMware-based VM nodes distributed across six firewalled domains. Experiments show that the testbed delivers good performance for two unmodified, representative benchmarks drawn from the life-sciences domain. We also demonstrate that the system is capable of seamlessly maintaining connectivity at the virtual IP layer for typical client/server applications (NFS, SSH, PBS) when VMs migrate across a WAN.

19.
On Composing Stream Applications in Peer-to-Peer Environments
Stream processing has become increasingly important as many emerging applications call for continuous real-time processing over data streams, such as voice-over-IP telephony, security surveillance, and sensor data analysis. In this paper, we propose a composable stream processing system for cooperative peer-to-peer environments. The system can dynamically select and compose stream processing elements located on different peers into user-desired applications. We investigate multiple alternative approaches to composing stream applications: 1) global-state-based centralized versus local-state-based distributed algorithms for initially composing stream applications at the setup phase. The centralized algorithm performs periodic global state maintenance while the distributed algorithm performs on-demand state collection. 2) Reactive versus proactive failure recovery schemes for maintaining composed stream applications during runtime. The reactive failure recovery algorithm dynamically recomposes a new stream application upon failures while the proactive approach maintains a number of backup compositions for failure recovery. We conduct both theoretical analysis and experimental evaluations to study the properties of different approaches. Our study illustrates the performance and overhead trade-offs among different design alternatives, which can provide important guidance for selecting proper algorithms to compose stream applications in cooperative peer-to-peer environments.

20.
Computer Networks, 2008, 52(9): 1675-1692
Many topology control algorithms aim to minimize energy consumption, interference, etc., while maintaining connectivity among the nodes. Links are realized between nodes by proper adjustment of transmission power in each node. However, a link is not useful if its bandwidth is lower than what is required by the applications it supports. Therefore, topology control algorithms should also consider the minimum realizable link bandwidth when generating topologies. As the realizable bandwidth of a link depends on the amount of interference received, interference analysis must be carried out. We study the minimum realizable link bandwidth of a given network under a TDMA/TDD channel sharing mechanism. We carry out the interference analysis assuming the general multi-sectored antenna configuration, since an omni-directional antenna can be visualized as a special configuration of a sectorized antenna (i.e., with one sector). We then proceed to derive the transmission power to be used in each sector of all nodes in the network to maximize average channel utilization. However, a brute-force method that runs through all possible topologies takes exponential time. We thus propose an algorithm, iMST, that attempts to maximize average channel utilization by reducing interference. The iMST algorithm not only generates k-edge-connected networks, but also guarantees minimum link bandwidth. Although iMST requires global knowledge of the locations of nodes, by using a distributed MST generation method, iMST can be run in a distributed manner. The iMST scheme is evaluated and compared against a modified Fault-tolerant Cone-Based Topology Control (FCBTC) scheme, referred to as s-FCBTC, that works with sectorized antennas, on a few performance metrics: average channel utilization, network diameter, minimum link bandwidth, and edge-connectedness, where edge-connectedness is a new metric defined in this paper. The iMST scheme is shown to exhibit good performance on many of these metrics.
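Since iMST builds on a minimum spanning tree over node locations, the toy sketch below computes such an MST with Euclidean link lengths as weights; the sectorized power assignment, k-edge-connectivity, and bandwidth guarantees of the actual algorithm are not reproduced, and the coordinates are made up.

```python
# Illustrative MST baseline for low-interference topology control: short
# links are preferred by weighting each candidate link with its length.
import math
import networkx as nx

positions = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (3, 1), 4: (2, 3)}  # made-up coordinates

g = nx.complete_graph(len(positions))
for u, v in g.edges:
    g[u][v]["weight"] = math.dist(positions[u], positions[v])   # link length

mst = nx.minimum_spanning_tree(g, weight="weight")
print(sorted(mst.edges(data="weight")))
```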
