Similar Documents
20 similar documents found (search time: 46 ms)
1.
A number of studies report that ICT sectors are responsible for up to 10% of worldwide power consumption and that a substantial share of that amount is due to the Internet infrastructure. To accommodate traffic in the peak hours, Internet Service Providers (ISPs) have overprovisioned their networks, with the result that most links and devices are under-utilized most of the time. Under-utilized links and devices may therefore be put into a sleep state to save power, which can be achieved by properly routing traffic flows. In this paper, we address the design of a joint admission control and routing scheme that aims to maximize the number of admitted flow requests while minimizing the number of nodes and links that need to stay active. We assume an online routing paradigm, where flow requests are processed one by one, with no knowledge of future requests. Each flow request has requirements in terms of bandwidth and m additive measures (e.g., delay, jitter). We develop a new routing algorithm, E2-MCRA, which searches for a feasible path for a given flow request that requires the least number of nodes and links to be turned on. The basic concepts of E2-MCRA are look-ahead, a depth-first search approach, and a path length defined as a function of the available bandwidth, the additive QoS constraints, and the current on/off status of the nodes and links along the path. Finally, we present the results of the simulation studies we conducted to evaluate the performance of the proposed algorithm.
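A minimal sketch of the kind of path-length function described above is shown below; it rejects links that cannot carry the requested bandwidth, normalizes the additive QoS metrics by the flow's budgets, and penalizes links and nodes that would have to be woken up. The field names and the wake-up penalty weight are illustrative assumptions, not the authors' implementation.

```python
def link_length(link, flow, wake_penalty=10.0):
    """Contribution of one link to the path length, or None if the link is infeasible."""
    if link["avail_bw"] < flow["bw"]:
        return None                       # cannot satisfy the bandwidth requirement
    # additive metrics (delay, jitter, ...) normalized by the flow's budgets
    qos = sum(m / b for m, b in zip(link["metrics"], flow["budgets"]))
    # waking a sleeping link or endpoint node costs extra, so active resources are preferred
    wake = wake_penalty * ((not link["on"]) + (not link["u_on"]) + (not link["v_on"]))
    return qos + wake

# Example: an active link with 100 Mb/s free, 5 ms delay and 1 ms jitter, evaluated for a
# flow asking 20 Mb/s with budgets of 50 ms delay and 10 ms jitter.
link = {"avail_bw": 100, "metrics": [5, 1], "on": True, "u_on": True, "v_on": True}
flow = {"bw": 20, "budgets": [50, 10]}
print(link_length(link, flow))            # 0.2 (metrics at 10% of their budgets, nothing to wake up)
```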

2.
The popularity and availability of Internet connections have opened up opportunities for network-centric collaborative work that was impossible a few years ago. Contending traffic flows in this collaborative scenario share different kinds of resources, such as network links, buffers, and router CPU. The goal should hence be overall fairness in the allocation of multiple resources rather than fairness with respect to a single resource. In this paper, firstly, we present a novel QoS-aware resource scheduling algorithm called Weighted Composite Bandwidth and CPU Scheduler (WCBCS), which jointly allocates the fair share of the link bandwidth as well as the processing resource to all competing flows. WCBCS also uses a simple and adaptive online prediction scheme for reliably estimating the processing times of incoming data packets. Secondly, we present analytical results, extensive NS-2 simulation work, and experimental results from our implementation on the Intel IXP2400 network processor. The simulation and implementation results show that our low-complexity scheduling algorithm can efficiently maximise CPU and bandwidth utilisation while maintaining guaranteed Quality of Service (QoS) for each individual flow.
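A small illustrative sketch of the two ingredients described above, under assumed names and parameters (the 0.5 bandwidth/CPU mixing weight, the EWMA gain, and the per-flow structure are not from the paper): an online EWMA estimate of per-packet processing time, and a composite per-packet cost that charges a flow for both link bandwidth and router CPU, normalized by its weight.

```python
class FlowState:
    """Per-flow bookkeeping for a composite bandwidth-plus-CPU scheduler (illustrative only)."""

    def __init__(self, weight):
        self.weight = weight           # configured fair share of this flow
        self.pred_cpu = 0.0            # predicted CPU time per packet (seconds)

    def update_cpu_estimate(self, measured, alpha=0.125):
        """EWMA update after a packet has actually been processed (online prediction)."""
        self.pred_cpu = (1 - alpha) * self.pred_cpu + alpha * measured

    def composite_cost(self, pkt_len_bits, link_rate_bps, beta=0.5):
        """Normalized cost of serving one packet; beta balances bandwidth against CPU."""
        tx_time = pkt_len_bits / link_rate_bps
        return (beta * tx_time + (1 - beta) * self.pred_cpu) / self.weight

f = FlowState(weight=2.0)
f.update_cpu_estimate(20e-6)                   # the last packet took 20 microseconds of CPU
print(f.composite_cost(12000, 100e6))          # cost of a 1500-byte packet on a 100 Mb/s link
```

A scheduler could then serve, at each decision point, the backlogged flow with the smallest accumulated composite cost, which is one simple way to couple bandwidth fairness and CPU fairness.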

3.
In the current Internet environment, a large amount of multimedia information is accessed through online computer systems. Among multimedia content, video sequences have the most valuable and meaningful influence on human emotions. The emotions one person experiences when watching a given video can therefore differ from another person's, depending on each viewer's mental state. In this research, we propose a new real-time emotion retrieval scheme for video based on image sequence features. The image sequence features consist of color information, key frame extraction, video sound, and optical flow. Each video feature is combined with a weight for emotion retrieval. The experimental results show that the new approach to real-time emotion retrieval in video yields better results than previous studies. The proposed scheme can be applied to many multimedia fields, such as movies, computer games, and video conferencing.

4.
Computer Networks, 2007, 51(14): 4153-4173
During the last two decades, several value-added services (e.g., IP multicast, IP traceback, etc.) have been proposed to extend the functional capabilities of the Internet. Due to the increasing role of these services, there is a need to better understand their impact on the network. In this paper, we present an experimental study of the intersection characteristics of end-to-end Internet paths and trees. We analyze these characteristics to understand the scale and distribution of the "state overhead" that various value-added network services incur on routers. For the reliability of our analysis, a representative, end-to-end router-level Internet map is essential. Although several maps are available, they are insufficient for our analysis. Therefore, in the first part of our work, we carry out a measurement study and present an alternative approach to obtaining an end-to-end router-level map that conforms to our constraints. In the second part, we conduct various experiments using our map and shed some light on the scale and distribution of the state overhead of value-added Internet services in both unicast and multicast environments.

5.
In this paper we investigate the dynamics of the totally asymmetric exclusion process (TASEP) on a one-dimensional lattice with long-range hopping and parallel update. The model is inspired by information traffic over networks. Compared with other TASEP models, particles in our model first try to hop to the last site with a probability q. The probability q may reflect network reliability or available bandwidth. A mean-field approach is developed to deal with the long-range hopping. Theoretical results for stationary particle currents, density profiles, and a phase diagram have been obtained. There are two possible stationary phases in the system, low density (LD) and high density (HD), corresponding to free flow and jammed flow, respectively. Interestingly, the bulk density in the LD phase tends to zero, while in the HD phase it is the same as that of the normal TASEP. The LD and HD phase regions are obtained numerically. The steady-state currents and density profiles obtained from computer simulations show very good agreement with the theoretical predictions. The model and theoretical results may provide a better understanding of information traffic flow over networks.

6.
With the approach of the 21st century, computer and software technologies have changed the very fabric of our society. Beyond the sheer advances in processor speed, the most dramatic, far-reaching changes have occurred in the way we transfer information. Through the medium of the Internet, information can now be transmitted instantaneously from one end of the planet to the other. The future of information technology is bright, to be sure, but not without its challenges. For the Internet to enable us to achieve all our goals, it must gain a real-time context of who we are, our state, and what we are doing. To this end, we must decouple context from the GUI and move it into a system built to hold it: a context engine. This system will let each of us organize our information according to the way we think, by association.

7.
Computer Networks, 2000, 32(2): 185-209
This paper presents a Differentiated Services (Diffserv or DS) architecture for multimedia streaming applications. Specifically, we define two types of services in the context of the Assured Forwarding (AF) per-hop behavior (PHB) that are differentiated in terms of reliability of packet delivery: the High Reliable (HR) service and the Less Assured (LA) service. We propose a novel node mechanism called Selective Pushout with Random Early Detection (SPRED) that is capable of simultaneously achieving the following four objectives: (1) a core router does not maintain any per-flow state information (i.e., it is core-stateless); (2) the packet sequence within each flow is not re-ordered at a node; (3) packets from the HR service are delivered more reliably than packets from the LA service at a node during congestion; and (4) packets from TCP traffic are dropped randomly to avoid global synchronization during congestion. We show that SPRED is a generalized buffer management algorithm that subsumes both tail dropping and Random Early Detection (RED), and combines the best features of the pushout (PO), RED, and RED with In/Out (RIO) mechanisms. Simulation results demonstrate that under the same link speed and network topology, network nodes employing our Diffserv architecture achieve substantial performance improvement over the current Best Effort (BE) Internet architecture for multimedia streaming applications.
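The sketch below illustrates one plausible reading of such a buffer-management decision, not the paper's algorithm: RED-style probabilistic early dropping as the average queue grows, plus a pushout step in which an arriving HR packet may evict a queued LA packet when the buffer is full. The thresholds, the queue representation, and the choice to apply the early drop only to LA packets are simplifying assumptions.

```python
import random

BUF_SIZE, MIN_TH, MAX_TH, MAX_P = 64, 16, 48, 0.1
queue = []                     # FIFO list of (service_class, payload); class is "HR" or "LA"

def red_drop(avg_qlen):
    """RED: drop probability rises linearly between the two thresholds."""
    if avg_qlen < MIN_TH:
        return False
    if avg_qlen >= MAX_TH:
        return True
    return random.random() < MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)

def enqueue(service_class, payload, avg_qlen):
    """Return True if the packet is queued, False if it is dropped."""
    if len(queue) < BUF_SIZE:
        if service_class == "LA" and red_drop(avg_qlen):
            return False                               # early (random) drop of an LA packet
        queue.append((service_class, payload))
        return True
    if service_class == "HR":                          # buffer full: try to push out an LA packet
        for i in range(len(queue) - 1, -1, -1):
            if queue[i][0] == "LA":
                del queue[i]                           # evict the most recently queued LA packet
                queue.append((service_class, payload))
                return True
    return False                                       # tail drop

print(enqueue("LA", "pkt-1", avg_qlen=10))   # True: average queue below MIN_TH, no early drop
print(enqueue("HR", "pkt-2", avg_qlen=10))   # True: HR packets are never early-dropped here
```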

8.
Wireless Mesh Networks (WMNs) extend Internet access to areas where the wired infrastructure is not available. Problems that arise are congestion around gateways, increased access latency, and low throughput. Object replication and placement is therefore essential for multi-hop wireless networks. Many replication schemes have been proposed for the Internet, but they are designed for CDNs that have both high bandwidth and high server capacity, which makes them unsuitable for the wireless environment. Object replication has received comparatively little attention from the research community when it comes to WMNs. In this paper, we propose an object replication and placement scheme for WMNs in which each mesh router acts as a replica server in a peer-to-peer fashion. The scheme exploits graph partitioning to build a hierarchy from fine-grained to coarse-grained partitions. The challenge is to replicate content as close as possible to the requesting clients, and thus reduce the access latency per object, while minimizing the number of replicas. Using simulation tests, we demonstrate that our scheme is scalable, performing well with respect to the number of replica servers and the number of objects. The simulation results also show that our scheme performs better than other replication schemes.
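A simplified sketch of the general idea, with assumed data structures and threshold (not the paper's algorithm): partitions form a hierarchy from fine to coarse, and a replica is placed in the finest partition whose aggregate, not-yet-served demand for an object justifies a local copy; residual demand bubbles up to the coarser parent.

```python
def place_replicas(hierarchy, demand, threshold):
    """
    hierarchy: list of levels, finest first; each level is a list of partitions,
               and each partition is a list of mesh-router ids.
    demand:    dict mapping router id -> request rate for the object.
    Returns the list of partitions that should host a replica.
    """
    placements, satisfied = [], set()
    for level in hierarchy:
        for part in level:
            pending = [r for r in part if r not in satisfied]
            if sum(demand.get(r, 0) for r in pending) >= threshold:
                placements.append(part)          # a local copy is worthwhile here
                satisfied.update(pending)
    return placements

hierarchy = [[["r1", "r2"], ["r3", "r4"], ["r5", "r6"]],        # fine-grained partitions
             [["r1", "r2", "r3", "r4", "r5", "r6"]]]            # coarse-grained parent
demand = {"r1": 5, "r2": 2, "r3": 2, "r4": 2, "r5": 2, "r6": 1}
# One replica lands in the busy fine partition; the remaining demand is covered
# by a single replica at the coarser level.
print(place_replicas(hierarchy, demand, threshold=6))
```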

9.
Despite the huge success of the Internet in providing basic communication services, its economic architecture needs to be upgraded so as to provide end-to-end guaranteed or more reliable services to its customers. Currently, a user or an enterprise that needs end-to-end bandwidth guarantees between two arbitrary points in the Internet for a short period of time has no way of expressing its needs. To allow these much-needed basic services, we propose a single-domain edge-to-edge (g2g) dynamic capacity contracting mechanism, in which a network customer can enter into a bandwidth contract on a g2g path at a future time, at a predetermined price. For practical and economic viability, such forward contracts must involve a bailout option to account for bandwidth becoming unavailable at service delivery time, and must be priced appropriately to enable Internet Service Providers (ISPs) to manage risks in their contracting and investments. Our design allows ISPs to advertise different point-to-point prices for each of their g2g paths instead of the current point-to-anywhere prices, allowing discovery of better end-to-end paths, temporal flexibility, and efficiency of bandwidth usage. We compute the risk-neutral prices for these g2g bailout forward contracts (BFCs), taking into account correlations between different contracts due to correlated demand patterns and overlapping paths. We apply this multiple-g2g-BFC framework to network models with Rocketfuel topologies. We evaluate our contracting mechanism in terms of key network performance metrics such as the fraction of bailouts, the revenue earned by the provider, and adaptability to link failures. We also explore the tradeoffs between the complexity of pricing and the performance benefits of our BFC mechanism.

10.
Video streaming across wide-area networks is one of the most important applications on the Internet. In this paper we focus on the quality-assurance issue on best-effort networks and propose a practical technique named staggered two-flow video streaming. We deliver a stored video through two separate flows in a staggered fashion via a VPN pipe from a central server to a proxy server. One flow, containing the essential portion of the video, is delivered using a novel controlled TCP (cTCP); the other flow, containing the enhanced portion of the video, is transmitted using a rate-controlled RTP/UDP (rUDP). To provide video-quality assurance in such a system, we design several application-aware flow-control and adaptation approaches to control bandwidth sharing and interactions among flows by exploiting the inherent priority structure in videos, the storage space on proxy servers, and the coarse-grain bandwidth assurance of VPNs. Our experiments using FreeBSD and simulations on NS-2 both demonstrate the efficacy of the proposed technique in protecting essential data and significantly reducing the number of packets retransmitted or lost in transmission and the size of the video prefixes required on proxy servers. In summary, our application-aware approach provides stable and predictable performance in streaming videos across wide-area best-effort networks. In addition, another salient feature of our approach is that it requires no changes on the client (receiving) side and minimal changes on the server (sending) side.

11.
Computer Networks, 2003, 41(4): 505-526
The IETF is currently working on service differentiation in the Internet. However, in wireless environments where bandwidth is scarce and channel conditions are variable, IP differentiated services are sub-optimal without lower-layer support. In this paper we present four service differentiation schemes for IEEE 802.11. The first is based on scaling the contention window according to the priority of each flow or user. For users with different priorities, the second, third, and fourth mechanisms assign different minimum contention window values, different interframe spacings, and different maximum frame lengths, respectively. We simulate and analyze the performance of each scheme with Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) flows.
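A toy sketch of the first two mechanisms (the per-priority CWmin values and scaling factors below are assumptions, not the values used in the paper): each priority class draws its random backoff from a differently sized, differently scaled contention window, so higher-priority traffic statistically wins contention more often.

```python
import random

CWMIN = {0: 31, 1: 63, 2: 127}      # smaller CWmin => higher priority (class 0)
SCALE = {0: 1.0, 1: 1.5, 2: 2.0}    # per-priority scaling of the window after each retry

def backoff_slots(priority, retries, cw_max=1023):
    """Pick a random backoff, doubling the (scaled) window on each retry, capped at cw_max."""
    cw = min(int(SCALE[priority] * (CWMIN[priority] + 1) * (2 ** retries)) - 1, cw_max)
    return random.randint(0, cw)

print(backoff_slots(priority=0, retries=0))   # drawn from [0, 31] for the highest-priority class
print(backoff_slots(priority=2, retries=2))   # drawn from a much larger window for the lowest class
```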

12.
Persistently saturated links are abnormal conditions that indicate bottlenecks in Internet traffic. Network operators are interested in detecting such links for troubleshooting, to improve capacity planning and traffic estimation, and to detect denial-of-service attacks. Currently, bottleneck links can be detected either locally, through SNMP information, or remotely, through active probing or passive flow-based analysis. However, local SNMP information may not be available due to administrative restrictions, and existing remote approaches are not used systematically because of their network or computation overhead. This paper proposes a new approach to remotely detect the presence of bottleneck links using spectral and statistical analysis of traffic. Our approach is passive, operates on aggregate traffic without flow separation, and supports remote detection of bottlenecks, addressing some of the major limitations of existing approaches. Our technique assumes that traffic through the bottleneck is dominated by packets with a common size (typically the maximum transmission unit, for reasons discussed in Section 5.1). With this assumption, we observe that bottlenecks imprint periodicities on packet transmissions based on the packet size and link bandwidth. Such periodicities manifest themselves as strong frequencies in the spectral representation of the aggregate traffic observed at a downstream monitoring point. We propose a detection algorithm based on rigorous statistical methods to detect the presence of bottleneck links by examining strong frequencies in aggregate traffic. We use data from live Internet traces to evaluate the performance of our algorithm under various network conditions. Results show that with proper parameters our algorithm can provide excellent accuracy (up to 95%) even if the traffic through the bottleneck link accounts for less than 10% of the aggregate traffic.
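The core observation lends itself to a compact sketch (simplified, with assumed parameters; not the authors' detection algorithm): an MTU-limited bottleneck emits packets spaced roughly packet_size / link_rate apart, which shows up as a strong spike in the spectrum of the binned aggregate arrival process seen downstream.

```python
import numpy as np

def dominant_frequency(arrival_times, bin_width=1e-5):
    """Bin packet arrivals into a time series and return the strongest non-DC frequency (Hz)."""
    t = np.asarray(arrival_times)
    n_bins = int(np.ceil((t.max() - t.min()) / bin_width))
    counts, _ = np.histogram(t, bins=n_bins)
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    freqs = np.fft.rfftfreq(n_bins, d=bin_width)
    return freqs[1:][np.argmax(spectrum[1:])]

# Example: 1500-byte packets on a 10 Mb/s bottleneck leave every 1.2 ms (about 833 Hz);
# with some jitter added, the estimated peak should still sit close to that rate.
rng = np.random.default_rng(0)
gaps = np.full(2000, 1500 * 8 / 10e6) + rng.normal(0, 5e-5, 2000)
print(dominant_frequency(np.cumsum(gaps)))    # peak close to 833 Hz
```

In the paper's setting, the statistical test would then decide whether such a spike is strong enough, relative to the rest of the spectrum, to declare a bottleneck.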

13.
Mobile surveillance is regarded as one of the Internet applications that has recently attracted much attention. However, the time and cost of dealing with heterogeneous platforms and proprietary protocols are a burden when developing such systems and expanding their services. In this paper, we present a framework for mobile surveillance services aimed at smartphone users. It includes the design and implementation of a video server and a mobile client called the smartphone watch. A component-based architecture is employed for both the server and the client for easy extension and adaptation. We also employ the well-known standard web protocol HTTP, which provides higher compatibility and portability than a proprietary protocol. Three different video transmission modes are provided for efficient usage of the limited bandwidth resources. We demonstrate our approach via real experiments on a commercial smartphone.

14.
We consider an Internet Service Provider's (ISP's) problem of providing end-to-end (e2e) services with bandwidth guarantees, using a path-vector-based approach. In this approach, an ISP uses its edge-to-edge (g2g) single-domain contracts and the vector of contracts purchased from neighboring ISPs as the building blocks to construct, or participate in constructing, an end-to-end "contract path". We develop a spot-pricing framework for e2e bandwidth-guaranteed services utilizing this path contracting strategy, formulating it as a stochastic optimization problem with the objective of maximizing expected profit subject to risk constraints. In particular, we present time-invariant path contracting strategies that offer high expected profit at low risk and can be implemented in a fully distributed manner. Simulation analysis is employed to evaluate the contracting and pricing framework under different network and market conditions. An admission control policy based on the path contracting strategy is developed, and its performance is analyzed using simulations.

15.
A digital signature is an important type of authentication in a public-key (or asymmetric) cryptographic system, and it is widely used in many digital government applications. We note, however, that the performance of an Internet server computing digital signatures online is limited by the high cost of modular arithmetic. One simple way to improve the performance of the server is to reduce the number of computed digital signatures by combining a set of documents into a batch in a smart way and signing each batch only once. This approach reduces the demand on the CPU but requires more network bandwidth for sending extra information to clients. In this paper, we investigate the performance of different online digital signature batching schemes. That is, we provide a framework for studying and analyzing the performance of a variety of such schemes. The results show that substantial computational benefits can be obtained from batching without significant increases in the amount of additional information that needs to be sent to the clients. Furthermore, we explore the potential benefits of more sophisticated batching schemes. The proposed analytical framework uses a semi-Markov model of a batch-based digital signature server. Emulation and simulation results show the accuracy and effectiveness of our proposed analytic framework.
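A minimal sketch of one simple batching scheme consistent with this idea (names and structure are illustrative; sign_once stands in for any conventional signature, e.g. RSA, over a single digest): hash every document in the batch, sign only the digest of the concatenated hashes, and ship each client the batch signature together with the sibling digests it needs to verify its own document.

```python
import hashlib

def sign_once(digest: bytes) -> bytes:
    """Placeholder for one conventional (modular-arithmetic) signature over a single digest."""
    return b"SIG(" + digest + b")"

def sign_batch(documents):
    digests = [hashlib.sha256(d).digest() for d in documents]
    root = hashlib.sha256(b"".join(digests)).digest()
    signature = sign_once(root)                      # one expensive signature per batch
    # per-document evidence: the batch signature plus every digest in the batch (extra bandwidth)
    return [(signature, digests) for _ in documents]

def verify(document, index, signature, digests):
    if hashlib.sha256(document).digest() != digests[index]:
        return False
    root = hashlib.sha256(b"".join(digests)).digest()
    return signature == sign_once(root)              # stands in for real signature verification

docs = [b"doc-a", b"doc-b", b"doc-c"]
sig, digs = sign_batch(docs)[1]
print(verify(b"doc-b", 1, sig, digs))                # True
```

The extra information sent per client, here the full list of digests, is exactly the bandwidth overhead the abstract refers to; tree-structured batches can shrink it to a logarithmic-size authentication path.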

16.
This article raises questions about the evaluation process for composition faculty who use computer and Internet technologies in the classroom and for distance learning. In particular, I discuss the panoptic effect made possible by the accessibility of our class Web pages to administrators, and provide an example situation from which we might learn. I conclude with a set of practical recommendations, for faculty and their departments, on evaluating those who work with computer and Internet technologies.

17.
The problem of bandwidth allocation in computer networks can be likened to the supply–demand problem in economics. This paper presents the economic generalized particle model (EGPM) approach to intelligent allocation of network bandwidth. EGPM is a significant extension and further development of the generalized particle model (GPM) [1]. The approach comprises two major components: (1) dynamic allocation of network bandwidth based on GPM; and (2) dynamic modulation of the price of and demand for network bandwidth. The resulting algorithm can easily be implemented in a distributed fashion. Pricing, the network control mechanism in EGPM, is carried out by a tatonnement process. We discuss EGPM's convergence and show that the approach is efficient in achieving the global Pareto optimum. Via simulations, we test the approach, analyze its parameters, and compare it with GPM and a genetic-algorithm-based solution.
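The tatonnement idea can be sketched in a few lines (the demand model, step size, and starting price below are illustrative assumptions, not the EGPM equations): the bandwidth price moves in proportion to excess demand until the total requested bandwidth matches the link capacity.

```python
def tatonnement(capacity, budgets, price=0.5, step=0.005, iters=500):
    """Each user demands budget/price bandwidth (a simple illustrative demand curve)."""
    for _ in range(iters):
        demand = sum(b / price for b in budgets)
        price = max(1e-6, price + step * (demand - capacity))   # raise price when over-demanded
    return price

budgets = [40.0, 35.0, 25.0]
p = tatonnement(capacity=100.0, budgets=budgets)
print(round(p, 3), round(sum(b / p for b in budgets), 2))   # ~1.0 and ~100.0 at equilibrium
```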

18.
This article deals with testing distributed real-time systems. More precisely, we propose: (1) a model for describing the specification of the implementation under test, (2) a distributed architecture for the test system (TS), (3) an approach for coordinating the testers which constitute the TS, and (4) a procedure for automatically deriving the test sequence of each tester from a global test sequence. In comparison with a very recent work, the proposed test method has the following important advantage: the clocks used by the different testers are not assumed to be perfectly synchronized. Rather, we assume, more realistically, that each clock is synchronized with a reference clock within a given (non-zero) inaccuracy. This advantage is very relevant if, for example, the testers communicate through the Internet.
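A small sketch of the timing consequence of that assumption (names and the symmetric plus/minus inaccuracy model are illustrative): a test system can assert that one tester's event preceded another's only when the timestamp gap exceeds the combined clock inaccuracies.

```python
def happened_before(ts_a, inacc_a, ts_b, inacc_b):
    """Return True/False when the ordering is certain, None when the clocks cannot tell."""
    if ts_a + inacc_a < ts_b - inacc_b:
        return True                       # A certainly preceded B
    if ts_b + inacc_b < ts_a - inacc_a:
        return False                      # B certainly preceded A
    return None                           # the inaccuracies overlap; order cannot be asserted

print(happened_before(10.000, 0.002, 10.010, 0.002))  # True: gap exceeds combined inaccuracy
print(happened_before(10.000, 0.004, 10.005, 0.004))  # None: ordering cannot be asserted
```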

19.
We present a novel bandwidth broker architecture for scalable support of guaranteed services that decouples the QoS control plane from the packet forwarding plane. More specifically, under this architecture, core routers do not maintain any QoS reservation states, whether per-flow or aggregate. Instead, the QoS reservation states are stored at and managed by a bandwidth broker. There are several advantages to such a bandwidth broker architecture. Among others, it avoids the problem of inconsistent QoS states faced by the conventional hop-by-hop, distributed admission control approach. Furthermore, it allows us to design efficient admission control algorithms without incurring any overhead at core routers. The proposed bandwidth broker architecture is designed on the basis of a recently developed core-stateless virtual time reference system. This virtual time reference system provides a unifying framework to characterize, in terms of their abilities to support delay guarantees, both the per-hop behaviors of core routers and the end-to-end properties of their concatenation. We focus on the design of efficient admission control algorithms under the proposed bandwidth broker architecture. We consider both per-flow end-to-end guaranteed delay services and class-based guaranteed delay services with flow aggregation. Using our bandwidth broker architecture, we demonstrate how admission control can be performed on a per-domain basis instead of a "hop-by-hop" basis. Such an approach may significantly reduce the complexity of the admission control algorithms. In designing class-based admission control algorithms, we investigate the problem of dynamic flow aggregation in providing guaranteed delay services and devise a new apparatus to effectively circumvent this problem. We conduct detailed analyses to provide theoretical underpinning for our schemes as well as to establish their correctness. Simulations are also performed to demonstrate the efficacy of our schemes.
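A minimal sketch of per-domain admission control in this spirit, with illustrative data structures and a fixed per-hop delay bound rather than the paper's virtual time reference system: the broker alone holds the reservation state for every link in the domain, so admitting a flow is a single check over its path and no per-flow state is installed in core routers.

```python
class BandwidthBroker:
    def __init__(self, link_capacity, link_delay):
        self.capacity = dict(link_capacity)   # link id -> capacity (Mb/s)
        self.delay = dict(link_delay)         # link id -> fixed per-hop delay bound (ms)
        self.reserved = {l: 0.0 for l in link_capacity}

    def admit(self, path, rate, delay_budget):
        """Admit the flow on `path` if every link has spare capacity and the delay bound holds."""
        if sum(self.delay[l] for l in path) > delay_budget:
            return False
        if any(self.reserved[l] + rate > self.capacity[l] for l in path):
            return False
        for l in path:                        # reservation state updated only at the broker
            self.reserved[l] += rate
        return True

bb = BandwidthBroker({"a-b": 100, "b-c": 100}, {"a-b": 2.0, "b-c": 3.0})
print(bb.admit(["a-b", "b-c"], rate=40, delay_budget=10))   # True
print(bb.admit(["a-b", "b-c"], rate=70, delay_budget=10))   # False: not enough spare capacity
```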

20.
Novel automated management systems for optical WDM networks promise to allow customers asking for a connection (i.e., a bandwidth service) to specify on demand the terms of the Service Level Agreement (SLA) to be guaranteed by the Network Operator (NO). In this work, we exploit the knowledge, among the other Service Level Specifications (SLSs), of the holding time and of the availability target of the connections to operate shared-path protection in a more effective manner. In the proposed approach, for each connection we monitor the actual downtime experienced by the connection, and, when the network state changes (typically upon a fault, a connection departure, or a connection arrival), we estimate a new, updated availability target for each connection based on our knowledge of all the predictable network-state changes, i.e., the future connection departures. Since some connections will be ahead of the availability target stipulated in their SLA (credit), while other connections will be behind their availability target (debit), we propose a mechanism that allows us to "trade" availability "credits" and "debits" by increasing or decreasing the shareability level of the backup capacity. Our approach permits flexible management of the availability provided to live connections during their holding times. The quality of the provided service is evaluated in terms of availability as well as the probability of violating the availability target stipulated in the SLA (also called the SLA Violation Risk), a recently proposed metric that has been demonstrated to guarantee higher customer satisfaction than classical statistical availability. For a typical wavelength-convertible US nationwide network, our approach obtains significant savings in Blocking Probability (BP) while reducing the penalties due to SLA violations. We also analytically demonstrate that the proposed scheme can be highly beneficial if the monitored metric is the SLA Violation Risk instead of availability.
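The credit/debit bookkeeping can be illustrated with a short sketch (parameter names and the linear pro-rating of allowed downtime are assumptions): the availability target and the holding time fix the total downtime a connection may accumulate, and the gap between the pro-rated allowance and the downtime observed so far is the connection's current credit (positive) or debit (negative).

```python
def availability_credit(target_availability, holding_time_h, downtime_so_far_h, elapsed_h):
    """Positive result = credit (downtime tolerance ahead of schedule); negative = debit."""
    allowed_total = (1.0 - target_availability) * holding_time_h   # total downtime allowed by the SLA
    allowed_so_far = allowed_total * (elapsed_h / holding_time_h)  # pro-rated allowance up to now
    return allowed_so_far - downtime_so_far_h

# A connection with a 99.9% target over 1000 h, halfway through, with 0.2 h of downtime:
print(round(availability_credit(0.999, 1000.0, 0.2, 500.0), 3))    # 0.3 h of credit
```

A connection in credit can tolerate a higher shareability level for its backup capacity, while a connection in debit needs more dedicated protection, which is the trade the abstract describes.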
