Similar Documents
 20 similar documents found (search time: 156 ms)
1.
This paper describes a reservation protocol that provides real-time services to mobile users in an Integrated Services Packet Network. Host mobility has a significant impact on the quality of service provided to a real-time application. The currently proposed network architecture and mechanisms that provide real-time services to fixed hosts are inadequate for mobile hosts, which can frequently change their points of attachment to the fixed network. Mobile hosts may therefore experience wide variations in quality of service. To reduce the impact of mobility on QoS guarantees, a mobile host needs to make advance resource reservations at the multiple locations it may visit during the lifetime of the connection. RSVP, the reservation protocol currently proposed for the Internet, cannot make such reservations for mobile hosts. In this paper, we describe a new reservation protocol, MRSVP, for supporting integrated services in a network with mobile hosts.
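
As a rough illustration of the active/passive reservation idea described above (this is not the MRSVP specification; the class name, methods, and bandwidth figures are invented for the example), the Python sketch below keeps one active reservation at the mobile host's current attachment point and passive advance reservations at the locations it may visit:

    # Illustrative sketch of MRSVP-style active/passive reservations.
    # All names and data structures here are assumptions, not the protocol itself.

    class MobileReservation:
        def __init__(self, flow_id, bandwidth_kbps):
            self.flow_id = flow_id
            self.bandwidth_kbps = bandwidth_kbps
            self.active_location = None          # where the host currently is
            self.passive_locations = set()       # locations it may visit

        def setup(self, current_location, candidate_locations):
            """Reserve actively at the current point of attachment and
            passively (in advance) at every candidate location."""
            self.active_location = current_location
            self.passive_locations = set(candidate_locations) - {current_location}

        def handoff(self, new_location):
            """On handoff, promote the passive reservation at the new location
            to active and demote the old active one to passive."""
            if new_location not in self.passive_locations:
                raise ValueError("no advance reservation at %s" % new_location)
            self.passive_locations.discard(new_location)
            self.passive_locations.add(self.active_location)
            self.active_location = new_location


    r = MobileReservation(flow_id=1, bandwidth_kbps=256)
    r.setup("cell_A", ["cell_A", "cell_B", "cell_C"])
    r.handoff("cell_B")
    print(r.active_location, sorted(r.passive_locations))  # cell_B ['cell_A', 'cell_C']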

2.
Media-centric networks deal with exchanging large media files between geographically distributed locations under strict deadlines. In such networks, resources need to be available at predetermined timeslots in the future and thus need to be reserved in advance, based on either flexible or fixed timeslot sizes. Reliability of the transfers is also important and can be attained by advance provisioning of redundant reservations. This, however, imposes additional costs, because redundant reservations are rarely used, causing network resources to be wasted. Further adaptation and better network utilization can be achieved at runtime by reusing unused reservations to transfer extra data as long as no failure has been detected. In this article, we design, implement, and evaluate a resilient advance bandwidth-reservation approach based on flexible timeslots, combined with a runtime adaptation approach, taking into account the specific characteristics of media transfers. The quality and complexity of the proposed approach have been extensively compared with those of a fixed-timeslot algorithm. Our simulation results reveal that the highest admittance ratio and percentage of fully transferred requests under failures are almost always achieved with flexible timeslots, while the execution time of this approach is up to 17.5 times lower than that of the approaches with fixed timeslot sizes.
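
To illustrate the kind of bookkeeping an advance-reservation scheduler performs (a simplified sketch only; the slot granularity, link capacity, and function names are assumptions, not the algorithm evaluated in the article), the snippet below admits a transfer only if the requested bandwidth fits in every slot between its start time and its deadline:

    # Simplified advance-reservation calendar for one link (illustrative only).

    LINK_CAPACITY = 10_000          # Mbit/s, assumed
    SLOTS = 24                      # one-hour slots over a day, assumed

    booked = [0] * SLOTS            # bandwidth already reserved per slot


    def admit(start_slot, end_slot, bandwidth):
        """Reserve `bandwidth` in every slot of [start_slot, end_slot) if it fits."""
        window = range(start_slot, end_slot)
        if all(booked[s] + bandwidth <= LINK_CAPACITY for s in window):
            for s in window:
                booked[s] += bandwidth
            return True
        return False                # request rejected: no advance capacity


    print(admit(2, 6, 6000))        # True  - fits in slots 2..5
    print(admit(4, 8, 6000))        # False - would exceed capacity in slots 4 and 5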

3.
With the exponential growth of the Internet, there is an increasing need to provide different types of services for numerous applications. Among these services, low-priority data transfer across wide area networks has attracted much attention and is used in a number of applications, such as data backup and system updating. Although low-priority data transfer has been investigated adequately for low-speed networks at the transport layer, designing it for high bandwidth-delay product networks is considerably more challenging. This paper proposes an adaptive low-priority protocol, implemented at the transport layer with an end-to-end approach, to achieve high utilization and fair sharing of links in high bandwidth-delay product networks. The protocol implements an adaptive congestion control mechanism that adjusts the congestion window size according to the amount of spare bandwidth. The improved congestion control mechanism is intended to make as much use of the available bandwidth as possible without disturbing regular transfers. Experiments demonstrate that the adaptive low-priority protocol achieves efficient and fair bandwidth utilization and remains non-intrusive to high-priority traffic. Copyright © 2016 John Wiley & Sons, Ltd.
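
The abstract's idea of growing the congestion window only into spare bandwidth can be pictured roughly as follows (a toy model, not the proposed protocol; the estimator inputs, update rule, and parameters are assumptions):

    # Toy low-priority congestion-window update driven by estimated spare bandwidth.
    # The formula and parameters are illustrative assumptions, not the paper's protocol.

    def update_cwnd(cwnd, link_capacity, measured_throughput, rtt, mss=1460):
        """Grow the window toward the estimated spare bandwidth; back off
        multiplicatively when no spare bandwidth is left."""
        spare = link_capacity - measured_throughput          # bytes per second
        if spare > 0:
            target = spare * rtt / mss                        # packets the spare pipe holds
            return cwnd + min(1.0, target / cwnd)             # gentle, at most +1 per RTT
        return max(2.0, cwnd / 2)                             # yield to regular traffic


    cwnd = 10.0
    for capacity, load in [(12.5e6, 5e6), (12.5e6, 12.4e6), (12.5e6, 13e6)]:
        cwnd = update_cwnd(cwnd, capacity, load, rtt=0.1)
        print(round(cwnd, 2))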

4.
A mobile ad hoc network (MANET) is an autonomous collection of mobile nodes that communicate over relatively bandwidth-constrained wireless links. MANETs need efficient algorithms to determine network connectivity, link scheduling, and routing. An important issue in MANET routing is to conserve power while still achieving a high packet success rate. Traditional MANET routing protocols do not account for this concern; they all assume unlimited power reservoirs. Several ideas have been proposed for adding power-awareness to ad hoc networks. Most of these proposals tackle the issue either by proposing new power-aware routing protocols or by modifying existing routing protocols to use power information as cost functions. None of them deal with the counter-measures that ought to be taken when nodes suffer from low power reserves and are liable to shut down in the middle of normal network operation. In this paper, power-awareness is added to a well-known routing protocol, the ad hoc on-demand distance vector (AODV) routing protocol. The original algorithm is modified to deal with situations in which nodes experience low power reserves. Two schemes are proposed and compared with the original protocol using performance metrics such as average end-to-end delay, transmission success rate, and throughput. These schemes give AODV the capability to deal with situations in which operating nodes have almost exhausted their power reserves. Copyright © 2005 John Wiley & Sons, Ltd.
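
One common way to add power awareness to route selection is to bias it with residual battery energy. The sketch below is a generic textbook policy shown only for illustration, not one of the two schemes proposed in the paper: among candidate routes it prefers the one whose weakest node still has the most energy left.

    # Illustrative power-aware route selection: among candidate routes, pick the one
    # whose minimum residual node energy is highest (max-min battery selection).
    # This is a generic policy, not the paper's specific AODV modification.

    def best_route(routes, energy):
        """routes: list of node-id lists; energy: dict node-id -> residual joules."""
        def bottleneck(route):
            return min(energy[n] for n in route)
        return max(routes, key=bottleneck)


    energy = {"a": 80, "b": 15, "c": 60, "d": 55, "e": 70}
    routes = [["a", "b", "e"],       # short but goes through the nearly drained node b
              ["a", "c", "d", "e"]]  # longer, but its weakest node still has 55 J
    print(best_route(routes, energy))   # ['a', 'c', 'd', 'e']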

5.
This paper proposes a novel transport network architecture for the next generation network (NGN) based on optical burst switching technology. The proposed architecture aims to provide efficient delivery of various types of network traffic by satisfying their quality-of-service constraints. To this end, we have developed a soft-state bandwidth reservation mechanism, which enables NGN transport nodes to dynamically reserve the bandwidth needed for active data burst flows. The performance of the proposed mechanism is evaluated by means of numerical analysis and NS2 simulation. Our results show that the packet delay is kept within the constraint for each traffic flow and the burst loss rate is remarkably improved.
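
The essence of a soft-state reservation is that it must be periodically refreshed or it expires on its own, releasing the bandwidth. A minimal illustration of that behaviour follows (the timeout value and data structures are assumptions, not the NGN mechanism itself):

    # Minimal soft-state reservation table: entries expire unless refreshed.
    # Timeout and structure are illustrative assumptions.

    import time

    SOFT_STATE_TIMEOUT = 3.0        # seconds, assumed

    reservations = {}               # flow_id -> (bandwidth, last_refresh_time)


    def refresh(flow_id, bandwidth):
        reservations[flow_id] = (bandwidth, time.monotonic())


    def expire_stale(now=None):
        now = now if now is not None else time.monotonic()
        for flow_id in [f for f, (_, t) in reservations.items()
                        if now - t > SOFT_STATE_TIMEOUT]:
            del reservations[flow_id]       # bandwidth released automatically


    refresh("flow-1", 400)
    expire_stale(time.monotonic() + 5)      # pretend 5 s pass without a refresh
    print(reservations)                     # {} - the reservation timed out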

6.
Using circuit-switched optical networks for next-generation e-science applications is gaining increasing interest. In such applications, circuits are provisioned for end hosts to accomplish data-intensive or QoS-stringent communication tasks. Existing provisioning methods provide point-to-point connectivity for end hosts; that is, an established circuit connects one end host to another, and during the lifetime of the circuit only communication tasks between the connected end hosts can be served. This prevents circuits from being used in more general cases, where each end host communicates with different remote parties simultaneously through a single network interface. We propose V-STONES, a data-flow-based VLAN tagging and switching technique that increases the connectivity of end-host network interfaces in circuit-switched networks. With V-STONES, not only can an IP end host communicate with different remote systems concurrently through bandwidth-guaranteed connections, but protocol entities at different stack layers can also talk to their counterparts through dedicated bandwidth pipes. In this article, we review existing circuit provisioning methods, then discuss V-STONES and the architecture of cross-layer circuit provisioning for end hosts. We also introduce a prototype implementation in an optical network testbed and present the experimental results.

7.
Due to increasing bandwidth capacities, the Internet has become a viable transport medium for (live) video. Delivery of video streams often relies on the client–server paradigm and therefore exhibits limited scalability. The peer-to-peer (P2P) network model is an attractive and scalable candidate for streaming video content to end users. However, these P2P frameworks typically operate in a network-agnostic mode; introducing network topology information into them offers opportunities to enhance performance. In this paper, we introduce a model that includes network information when streaming (multilayered) video in P2P frameworks. An important metric for video stream providers is the content quality perceived by end users, so the optimization studied here aims at maximizing the number of users receiving a high-quality video. The paper addresses the optimization problem from the stream provider's viewpoint, with access to network topology information. An exact optimization approach is presented for benchmarking purposes, along with a heuristic approach to cope with realistic network sizes. In addition, we present an approach to decide where to deploy peering functionality. The results show that our strategy significantly decreases the fraction of destinations receiving only the base layer and that, by introducing extra peering functionality, network capacities are used more efficiently. Copyright © 2015 John Wiley & Sons, Ltd.

8.
In this paper, we analyse upper bounds on the end-to-end delay and on the required buffer size at the leaky bucket and at packet switches within the network, in the context of the deterministic bandwidth allocation method in integrated services packet networks (ISPNs). Based on that formulation, we then propose a CAC method suitable for ISPNs that guarantees a bounded end-to-end delay and loss-free packet transmission. As an example application, GOP-CBR MPEG-2 is considered; in that case, we also show tighter bounds obtained by slightly modifying the GOP-CBR MPEG-2 coding method. Using actual traced GOP-CBR MPEG-2 data, we discuss the applicability of our analytical results and of the proposed CAC by comparing them with simulation. Numerical results show that even the loose upper bounds achieve higher utilization under deterministic bandwidth allocation than the peak bandwidth allocation strategy. Copyright © 2000 John Wiley & Sons, Ltd.
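
For intuition about the kind of deterministic bound discussed above, a classical token-bucket result (standard network-calculus material, not the paper's tighter GOP-CBR MPEG-2 analysis) bounds the single-node queueing delay of a flow shaped to burst b and rate r, served at a reserved rate R >= r, by b/R. A quick numeric check:

    # Classical single-node delay bound for a token-bucket constrained flow served
    # at a guaranteed rate (textbook result, not the paper's tighter MPEG-2 bounds).

    def delay_bound_seconds(burst_bytes, reserved_rate_bps):
        """Worst-case queueing delay D <= b / R for an (r, b)-shaped flow, R >= r."""
        return burst_bytes * 8 / reserved_rate_bps


    # Example: 64 kB burst, 10 Mbit/s reserved -> about 52 ms worst-case delay.
    print(round(delay_bound_seconds(64 * 1024, 10e6) * 1000, 1), "ms")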

9.
Network resource dimensioning and traffic engineering influence the quality of the provisioned services required by Expedited Forwarding (EF) traffic in production networks built on DiffServ over MPLS. By modeling EF traffic flows and the excess network resources reserved for them, we derive the range of delay values required to support these flows at DiffServ nodes. This enables us to develop an end-to-end (e2e) delay budget-partitioning mechanism and traffic-engineering techniques within a framework for supporting new premium QoS levels, differentiated by e2e delay, jitter, and loss. This framework enables ingress routers to control the admission of EF traffic flows and to select appropriate routing paths, with the goals of balancing EF traffic, avoiding congestion, and getting the most use out of the available network resources through traffic engineering. As a result, the framework should enable Internet service providers whose networks are DiffServ MPLS-TE aware to offer three performance levels of the EF service class to their customers. Copyright © 2008 John Wiley & Sons, Ltd.
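
The sketch below shows one simple way an end-to-end delay budget could be split across the DiffServ nodes on a path, weighting each hop by its expected contribution (an illustrative scheme, not the partitioning mechanism developed in the paper):

    # Illustrative end-to-end delay budget partitioning across hops,
    # proportional to each hop's expected load (not the paper's mechanism).

    def partition_budget(e2e_budget_ms, hop_loads):
        """Split the budget so that more loaded hops get a larger share."""
        total = sum(hop_loads)
        return [e2e_budget_ms * load / total for load in hop_loads]


    budget = partition_budget(100.0, hop_loads=[0.2, 0.7, 0.5])
    print([round(b, 1) for b in budget])    # [14.3, 50.0, 35.7] ms per hop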

10.
The emergence of new kinds of applications and technologies (e.g., data-intensive applications, server virtualization, and big data technology) has led to higher utilization of network resources. These services imply increased bandwidth consumption and unexpected congestion, especially in backbones. In this article, a novel proposal is studied with the aim of improving the performance of prioritized forwarding equivalence classes in congested Multiprotocol Label Switching Transport Profile (MPLS-TP) domains. The impact of congestion on QoS-aware services that require high reliability and low delay is analyzed. A new policy has been implemented on MPLS-TP, a technology that provides QoS by means of flow differentiation in Internet backbones. The proposal, known as the Gossip-based local recovery policy, is offered as an operation, administration, and management function that allows local recovery of lost traffic for MPLS-TP privileged forwarding equivalence classes. In order to fulfill the requirements for implementation on MPLS-TP, a minimum set of extensions to resource reservation protocol traffic engineering (RSVP-TE) has also been proposed to provide routes capable of self-management. Finally, we measure the performance improvement by means of an analytical model and simulations. Copyright © 2014 John Wiley & Sons, Ltd.

11.
Neogi A., Chiueh T., Stirpe P. IEEE Network, 1999, 13(5): 56-63
RSVP is a bandwidth reservation protocol that allows distributed real-time applications such as videoconferencing software to make bandwidth reservations over packet-switched networks. Coupled with real-time scheduling mechanisms built into packet routers, the network guarantees to provide the reserved bandwidth throughout the lifetime of the applications. Although guaranteed services are of great value to both end users and carrier providers, their performance cost, due to additional control and data processing overhead, can potentially have a negative impact on the packet throughput and latency of RSVP-capable routers. The goal of this article is to examine the performance cost of RSVP based on measurements from an industrial-strength RSVP implementation on a commercial IP router. The focus is on a detailed evaluation of the performance implications of various architectural decisions in RSVP. We found that RSVP's control messages do not incur significant overhead in terms of processing delay and bandwidth consumption. However, the performance overhead of real-time packet scheduling is noticeable in the presence of a large number of real-time connections. In extreme cases, the performance guarantees of existing real-time connections may not be kept, and some best-effort packets are actually dropped, even though the overall bandwidth requirement from these connections is smaller than the available link bandwidth.

12.
This paper presents a QoS (quality of service)-aware routing and power control algorithm with low transmission power consumption for multimedia services over mobile ad hoc networks. Multimedia services generally require stringent QoS from the network. However, it is not easy to guarantee QoS over a mobile ad hoc network, since its resources are very limited and time-varying. Furthermore, only a limited amount of power is available at mobile nodes, which makes the problem more challenging. We propose an effective routing and power control algorithm for multimedia services that satisfies the end-to-end delay constraint with low transmission power consumption. The proposed algorithm supports the required bandwidth by keeping the channel quality of each link along the route within a tolerable range. In addition, a simple but effective route maintenance mechanism is implemented to avoid link failures that may significantly degrade streaming video quality. Finally, a performance comparison with existing algorithms is presented with respect to traditional routing performance metrics, and an achievable video quality comparison is provided to demonstrate the superiority of the proposed algorithm for multimedia services over mobile ad hoc networks. Copyright © 2010 John Wiley & Sons, Ltd.

13.
The cloud computing service delivery and consumption model is based on a communication infrastructure (the network). The network serves as the link between the end users consuming cloud services and the data centers providing them. In addition, in large-scale cloud data centers, tens of thousands of compute and storage nodes are connected by a data center network to deliver a single-purpose cloud service. To this end, several questions can be raised, such as: How do network architectures affect cloud computing? How will network architecture evolve to better support cloud computing and cloud-based service delivery? What is the network's role in the reliability, performance, scalability, and security of cloud computing? Should the network be a dumb transport pipe or an intelligent stack that is aware of the cloud workload? This paper focuses on the networking aspect of cloud computing and provides insights into these questions. Researchers can use this paper to accelerate their research on devising mechanisms for (i) provisioning the cloud network as a service and (ii) engineering networks of data centers. Copyright © 2014 John Wiley & Sons, Ltd.

14.
One of the major problems in deploying the Resource ReSerVation Protocol (RSVP) in mobile environments is the advance resource reservation (ARR) problem. Conventional solutions to this problem waste too many network resources and increase the blocking probability of new Quality of Service (QoS) sessions. In this paper, we propose a reservation-optimised ARR scheme that constrains the number of advance reservation paths in a subnet and only allows the most eligible mobile nodes to make advance reservations. Furthermore, to evaluate the performance of the schemes, we build Markovian models of different ARR schemes using a formal performance modelling formalism, Performance Evaluation Process Algebra (PEPA). Our results indicate that the proposed reservation-optimised ARR scheme can effectively balance the active and passive reservation blocking probabilities and achieves better utilisation of network resources, especially when the traffic intensity is high.
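
The core admission idea above, capping the number of advance-reservation paths per subnet and granting them only to the most eligible mobiles, can be sketched as follows (the eligibility score and the cap are illustrative assumptions, not the paper's scheme or its PEPA model):

    # Illustrative cap on advance reservations per subnet: rank candidates by an
    # assumed eligibility score and admit only the top-k (not the paper's scheme).

    def select_advance_reservations(candidates, max_paths):
        """candidates: list of (node_id, handoff_probability, required_bw).
        Eligibility here is simply the handoff probability (an assumption)."""
        ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
        return [node for node, _, _ in ranked[:max_paths]]


    candidates = [("mn1", 0.9, 256), ("mn2", 0.4, 128), ("mn3", 0.7, 512)]
    print(select_advance_reservations(candidates, max_paths=2))   # ['mn1', 'mn3']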

15.
Wireless sensor networks (WSNs) are characterized by low bandwidth, limited energy, and largely distributed deployment. To reduce the flooding overhead of transmitting query and data information, several data-centric storage (DCS) mechanisms have been proposed. However, the locations of these data-centric nodes significantly impact the power consumption and the efficiency of information queries and storage, especially in a multi-sink environment. This paper proposes a novel dissemination approach, the dynamic data-centric routing and storage mechanism (DDCRS), which dynamically determines the locations of data-centric nodes according to the sink nodes' locations and data collection rates, and automatically constructs shared paths from data-centric nodes to multiple sinks. To save power, the data-centric node is changed when new sink nodes join the WSN or when queries change their frequencies. The simulation results reveal that the proposed protocol outperforms existing protocols in terms of power conservation and power balancing. Copyright © 2009 John Wiley & Sons, Ltd.
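
A rough way to picture how a data-centric node might be re-selected as sinks join or change their query rates is to pick the sensor that minimizes the rate-weighted distance to all sinks (purely illustrative; the actual DDCRS selection and path construction are more involved):

    # Illustrative selection of a data-centric node: the grid node minimizing the
    # query-rate-weighted distance to all sinks (not the actual DDCRS rule).

    def pick_data_centric_node(nodes, sinks):
        """nodes: list of (x, y); sinks: list of ((x, y), query_rate)."""
        def weighted_cost(node):
            nx, ny = node
            return sum(rate * ((nx - sx) ** 2 + (ny - sy) ** 2) ** 0.5
                       for (sx, sy), rate in sinks)
        return min(nodes, key=weighted_cost)


    nodes = [(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)]
    sinks = [((0, 0), 5.0), ((90, 90), 1.0)]
    print(pick_data_centric_node(nodes, sinks))   # biased toward the busier sink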

16.
This paper presents a terminal-assisted frame-based packet reservation multiple access (TAF-PRMA) protocol, which optimizes random access control among heterogeneous traffic, aiming at more efficient integrated voice/data services in dynamic-reservation TDMA-based broadband access networks. In order to achieve differentiated quality-of-service (QoS) guarantees for the individual services together with maximal system resource utilization, TAF-PRMA independently controls the random access parameters, such as the lengths of the access regions dedicated to the respective traffic types and the corresponding permission probabilities, on a frame-by-frame basis. In addition, we have adopted a terminal-assisted random access mechanism in which the voice terminal readjusts a global permission probability received from the central controller in order to handle the 'fair access' issue resulting from the distributed queuing problems inherent in the access network. Our extensive simulation results indicate that TAF-PRMA achieves significant improvements in terms of voice capacity, delay, and fairness over most existing medium access control (MAC) schemes for integrated services.
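
A common way to regulate contention in a reservation MAC is to set the per-frame permission probability from the ratio of available access slots to backlogged terminals. The sketch below shows that generic rule only to illustrate the knob the abstract refers to; it is not TAF-PRMA's actual control law.

    # Generic per-frame permission probability: roughly one expected attempt per
    # available access slot (illustrative only, not TAF-PRMA's control law).

    def permission_probability(free_access_slots, backlogged_terminals):
        if backlogged_terminals == 0:
            return 1.0
        return min(1.0, free_access_slots / backlogged_terminals)


    # Separate probabilities could then be computed for the voice and data regions.
    print(permission_probability(free_access_slots=4, backlogged_terminals=10))  # 0.4
    print(permission_probability(free_access_slots=6, backlogged_terminals=3))   # 1.0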

17.
Data centers play a crucial role in the delivery of cloud services by enabling on-demand access to shared resources such as software, platforms, and infrastructure. Virtual machine (VM) allocation is one of the challenging tasks in data center management, since user requirements, typically expressed as service-level agreements, have to be met with minimum operational expenditure. Despite their huge processing and storage facilities, data centers are among the major contributors to the greenhouse gas emissions of IT services. In this paper, we propose a holistic approach for a large-scale cloud system in which cloud services are provisioned by several data centers interconnected over the backbone network. Leveraging the possibility of virtualizing the backbone topology in order to bypass IP routers, which are major power consumers in the core network, we propose a mixed integer linear programming (MILP) formulation for VM placement that aims at minimizing both the power consumption of the virtualized backbone network and the resource usage inside data centers. Since the general holistic MILP formulation requires heavy and long-running computations, we partition the problem into two sub-problems, namely intra- and inter-data-center VM placement. In addition, for inter-data-center VM placement, we propose a heuristic to compute the virtualized backbone topology reconfiguration in reasonable time. We thoroughly assessed the performance of our proposed solution, comparing it with another notable MILP proposal in the literature; the collected experimental results show the benefit of the proposed management scheme in terms of power consumption, resource utilization, and fairness for medium-size data centers. Copyright © 2013 John Wiley & Sons, Ltd.
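
For a feel of the placement problem (deliberately much simpler than the paper's MILP formulation; the greedy rule, capacity numbers, and names are assumptions), the following sketch packs the largest VMs first, each into the data center with the most remaining capacity:

    # Greedy worst-fit-decreasing VM placement across data centers.
    # A toy baseline for intuition only; the paper formulates an MILP instead.

    def place_vms(vm_demands, dc_capacities):
        """vm_demands: dict vm -> CPU units; dc_capacities: dict dc -> CPU units."""
        remaining = dict(dc_capacities)
        placement = {}
        for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
            # put the VM in the data center with the most remaining capacity
            dc = max(remaining, key=remaining.get)
            if remaining[dc] < demand:
                raise RuntimeError("no data center can host %s" % vm)
            remaining[dc] -= demand
            placement[vm] = dc
        return placement


    vms = {"vm1": 8, "vm2": 4, "vm3": 6, "vm4": 2}
    dcs = {"dc_east": 10, "dc_west": 12}
    print(place_vms(vms, dcs))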

18.
Providing adequate fault tolerance while using network capacity efficiently is a major research topic in optical networks. In order to improve network utilization, grooming of low-rate connections in optical networks has usually been performed at the edge of the network. However, in all-optical networks, once a channel is assigned, its entire capacity is dedicated to the user, regardless of its grooming capabilities. As current users do not usually require such large capacities, bandwidth inefficiencies still occur. In this paper we address this issue by introducing unlimited grooming per link (UGPL), a new restoration mechanism for opaque mesh optical networks that grooms connections on a per-link basis. Simulation results show that UGPL provides the best bandwidth efficiency and the best blocking probability compared with traditional 1 + 1 protection and 1 : N end-to-end sharing schemes. Furthermore, we show that the 1 : N end-to-end restoration scheme provides no benefits over the simpler and faster 1 + 1 protection scheme. Copyright © 2004 John Wiley & Sons, Ltd.

19.
Traffic load balancing in data centers is an important requirement. Traffic dynamics and possible changes in the topology (e.g., failures and asymmetries) make load balancing a challenging task. Existing end-host-based schemes either employ the predominantly used ECN or combine it with RTT to obtain congestion information about paths. Both congestion signals have limitations: ECN only tells whether the queue length is above or below a threshold value but does not indicate the extent of congestion; similarly, RTT in data center networks is on the scale of up to a few hundred microseconds, and current data center operating systems lack fine-grained microsecond-level timers. Therefore, there is a need for a new congestion signal that gives accurate information about the congestion along a path. Furthermore, in end-host-based schemes, detecting asymmetries in the topology is challenging because RTT cannot be measured accurately at microsecond scale. This paper presents QLLB, an end-host-based, queue-length-based load balancing scheme. QLLB employs a new queue-length-based congestion signal that gives an exact measure of the congestion along each path. Furthermore, QLLB uses relative RTT to detect asymmetries in the topology. QLLB is implemented in ns-3 and compared with ECMP, CONGA, and Hermes. The results show that QLLB significantly improves the performance of short flows over the other schemes and performs within an acceptable margin of CONGA and Hermes for long flows. In addition, QLLB effectively detects asymmetric paths and performs better than Hermes under high loads.
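
The key decision in a queue-length-based load balancer is simply to send the next flow over the path whose reported queue occupancy is lowest. A minimal sketch of that decision follows (the feedback format and the tie-breaking rule are assumptions, not QLLB as implemented in ns-3):

    # Minimal queue-length-based path selection: pick the path whose most recently
    # reported queue occupancy is smallest (feedback format is an assumption).

    import random

    def choose_path(queue_feedback):
        """queue_feedback: dict path_id -> queue length in packets."""
        best = min(queue_feedback.values())
        # break ties randomly so equally good paths share the load
        return random.choice([p for p, q in queue_feedback.items() if q == best])


    feedback = {"path_1": 42, "path_2": 7, "path_3": 7}
    print(choose_path(feedback))    # path_2 or path_3, never the congested path_1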

20.
We propose an opportunistic cross-layer architecture for adaptive support of Voice over IP in multi-hop wireless LANs. Rather than providing high call quality, we target emergencies, where it is important to communicate, even if at low quality, no matter how harsh the network conditions. With the importance of delay for voice quality in mind, we select adaptation parameters that control the ratio of real-time traffic load to available bandwidth. This is achieved in two ways: minimizing the load and maximizing the bandwidth. The PHY/MAC interaction improves the use of spectral resources by opportunistically exploiting rate control and packet bursts, while the MAC/application interaction controls the demand per source through voice compression. The objective is to maximize the number of admitted calls that satisfy the end-to-end delay budget. The performance of the protocol is studied extensively in the ns-2 network simulator. Results indicate that call quality degrades as load increases and over longer paths, and that a larger packet size improves performance. For long paths with low-quality channels, forward error correction, header compression, and relaxing the system's delay budget are required to maintain call admission and quality. The proposed adaptive protocol achieves large performance improvements over the traditional, non-adaptive approach. Copyright © 2008 John Wiley & Sons, Ltd.
