Similar Documents
20 similar documents found.
1.
We consider a class of queueing networks referred to as "generalized constrained queueing networks" which form the basis of several different communication networks and information systems. These networks consist of a collection of queues such that only certain sets of queues can be concurrently served. Whenever a queue is served, the system receives a certain reward. Different rewards are obtained for serving different queues, and furthermore, the reward obtained for serving a queue depends on the set of concurrently served queues. We demonstrate that the dependence of the rewards on the schedules alters fundamental relations between performance metrics like throughput and stability. Specifically, maximizing the throughput is no longer equivalent to maximizing the stability region; we therefore need to maximize one subject to certain constraints on the other. Since stability is critical for bounding packet delays and buffer overflow, we focus on maximizing the throughput subject to stabilizing the system. We design provably optimal scheduling strategies that attain this goal by scheduling the queues for service based on the queue lengths and the rewards provided by different selections. The proposed scheduling strategies are, however, computationally complex. We subsequently develop techniques to reduce the complexity and yet attain the same throughput and stability region. We demonstrate that our framework is general enough to accommodate random rewards and random scheduling constraints.
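As a rough illustration of a selection rule driven jointly by queue lengths and schedule-dependent rewards, the following Python sketch scores each feasible activation set by a weighted combination of backlog and reward and picks the best one. The weight V, the linear scoring rule, and the example schedules are illustrative assumptions, not the provably optimal policy of the paper.

# Hedged sketch: pick the feasible activation set that maximizes a weighted
# combination of queue backlog and schedule-dependent reward.  The weight V
# and the linear score are illustrative, not the paper's optimal policy.
def pick_schedule(queue_lengths, schedules, V=1.0):
    # schedules: list of dicts {queue index: reward if served in this set}
    def score(sched):
        return sum(queue_lengths[q] + V * r for q, r in sched.items())
    return max(schedules, key=score)

queues = [5, 0, 3, 7]                 # current queue lengths
feasible = [
    {0: 1.0, 2: 0.5},                 # serve queues 0 and 2 together
    {1: 2.0, 3: 1.0},                 # serve queues 1 and 3 together
    {3: 3.0},                         # serve queue 3 alone, at a higher reward
]
print(pick_schedule(queues, feasible))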

2.
Consider n stations sharing a single communications channel. Each station has a buffer of length one. If the arrival rate of station i is r_i, then 1 − ∏_i (1 − r_i) is shown to be an upper bound (over all policies) on the throughput of the channel. Moreover, an optimal policy always exists and is stationary and periodic. The throughputs of two policies, the random policy and the golden-ratio policy, are analyzed for a finite and an infinite number of stations. The latter is shown to approach a limit which is within at least 98.4 percent of the upper bound.
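A minimal sketch of the stated bound: for per-station arrival rates r_i, the channel throughput of any policy cannot exceed 1 − ∏_i (1 − r_i). The arrival rates below are made-up values used purely for illustration.

from math import prod

# Hypothetical per-slot arrival probabilities r_i for stations with
# single-slot buffers.
r = [0.2, 0.3, 0.1, 0.4]

# Upper bound on channel throughput over all policies: 1 - prod_i (1 - r_i).
upper_bound = 1.0 - prod(1.0 - ri for ri in r)
print(f"throughput upper bound = {upper_bound:.4f}")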

3.
A finite-buffered banyan network analysis technique designed to model networks at high traffic loads is presented. The technique specially models two states of the network queues: queue empty and queue congested (roughly, zero or one slots free). A congested queue tends to stay congested because packets bound for the queue accumulate in the previous stage. The expected duration of this state is computed using a probabilistic model of a switching module feeding the congested queue. A technique for finding a lower arrival rate to an empty queue is also described. The queues themselves are modeled using independent Markov chains with an additional congested state added. The new analysis technique is novel in modeling both the higher arrival rate to a congested queue and the lower arrival rate to an empty queue. Comparison of queue state distributions obtained with the analysis and with simulation shows that an important feature of congestion is modeled.

4.
In this paper, we consider a system modelled as an M/M/1 queue. Jobs corresponding to different classes are sent to the queue and are characterized by a delay cost per unit of time and a demand function. Our goal is to design an optimal pricing scheme for the queue, where the total charge depends on both the mean delay at the queue and the arrival rate of each customer. We also assume that these two values have to be (statistically) measured, introducing errors in the total charge that might deter jobs from using the system and thus decrease demand. This model can be applied in telecommunication networks, where pricing can be used to control congestion and the network can be characterized by a single bottleneck queue; the throughput of each class would be determined through passive measurements while the delay would be determined through active measurements.
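A minimal numeric sketch of the setting: the bottleneck is an M/M/1 queue, so the mean sojourn time is 1/(μ − λ), and a per-class charge can be built from the measured arrival rate and measured delay. The rates, the price constant, and the linear charge below are illustrative assumptions, not the optimal pricing scheme derived in the paper.

# Minimal numeric sketch of the M/M/1 setting described above.  mu, the
# per-class arrival rates, and the charge function are illustrative
# assumptions, not the paper's pricing scheme.
mu = 10.0                                    # service rate of the bottleneck queue
lambdas = {"class_a": 3.0, "class_b": 4.0}   # per-class arrival rates

lam_total = sum(lambdas.values())
assert lam_total < mu, "queue must be stable"

mean_delay = 1.0 / (mu - lam_total)          # mean sojourn time of an M/M/1 queue

# Hypothetical charge: each class pays in proportion to its measured
# arrival rate and to the measured mean delay.
price_per_unit = 2.0
for cls, lam in lambdas.items():
    charge = price_per_unit * lam * mean_delay
    print(cls, f"rate={lam}, mean delay={mean_delay:.3f}, charge={charge:.3f}")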

5.
This paper studies the transient behavior of a Markov-modulated Poisson arrival queue under overload control. The queue has finite or infinite buffer capacity with multiple exponential servers. A Markov-modulated Poisson process is used to represent an aggregated voice or video packet arrival process in integrated service networks. By overload control, we mean to properly adapt the arrival process once the buffer contents exceed a designated level. The probability distribution of queue length as a function of time is obtained. The temporal effect of the overload control is measured in two forms. While in overload, we measure the amount of time for the queue to fall into underload. While in underload, we measure the amount of time for the queue to rise to overload. A proper design of the control will not only reduce the fall time but also increase the rise time. We also explore the transient queueing behavior as affected by the stochastic properties of the underlying Markov chain for the arrival process.

6.
This paper considers a parallel system of queues fed by independent arrival streams, where the service rate of each queue depends on the number of customers in all of the queues. Necessary and sufficient conditions for the stability of the system are derived, based on stochastic monotonicity and marginal drift properties of multiclass birth and death processes. These conditions yield a sharp characterization of stability for systems where the service rate of each queue is decreasing in the number of customers in other queues, and has uniform limits as the queue lengths tend to infinity. The results are illustrated with applications where the stability region may be nonconvex.

7.
A new analysis technique, dynamic-bubblesort analysis, is introduced for general K-queue first-in-first-out HFJ (homogeneous fork/join queueing) systems with K ⩾ 2. The dynamic-bubblesort model dynamically sorts the branches of the queues based on the number of tasks waiting for synchronization in each branch. Jobs arrive with mean rate λ and a general arrival distribution. Upon arrival, a job forks into K tasks. Task k, k = 1, 2, ..., K, is assigned to the kth queueing system, which is a first-in-first-out server with a general service distribution and an infinite capacity queue. A job leaves the HFJ system as soon as all its tasks complete their service. In other words, tasks corresponding to the same job are joined before departing the HFJ system. We obtain a general and simple hybrid solution, combining analysis and simulation, for the mean response time, which we denote by T_K. We obtain a very simple (as a function of T_1 and T_2 only) and general upper bound expression for T_K, and we get an exact relationship between the cases K = 2 and K = 3. We evaluate our results by simulating 2, 3, ..., 99, and 100 queues for ρ = 0.1, 0.2, ..., 0.8, and 0.9, each for four different HFJ cases, where ρ = λ/μ and μ is the average task service rate for a server. The maximum absolute offset of our hybrid solutions from all the simulations is less than 0.33 percent (1/300), which is a reasonable error ratio for simulation. The maximum offset of our upper bounds over all the simulations is 21 percent.
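For intuition about what T_K measures, here is a small Monte-Carlo sketch of a homogeneous fork/join system: each job forks into K tasks, one per FCFS server, and the job's response time is driven by its slowest task. Poisson arrivals and exponential service times are illustrative assumptions (the paper allows general distributions), and this sketch is not the paper's hybrid analysis/simulation method.

import random

# Monte-Carlo sketch of the mean response time T_K of a homogeneous fork/join
# (HFJ) system: each job forks into K tasks, one per FCFS server; the job
# leaves when its last task finishes.  Poisson arrivals and exponential
# service are illustrative assumptions.
def hfj_mean_response(K, lam, mu, n_jobs=200_000, seed=1):
    rng = random.Random(seed)
    t_arr = 0.0
    completion = [0.0] * K            # last completion time at each server
    total_resp = 0.0
    for _ in range(n_jobs):
        t_arr += rng.expovariate(lam)                 # Poisson arrivals
        job_done = 0.0
        for k in range(K):
            start = max(t_arr, completion[k])         # FCFS: wait for server k
            completion[k] = start + rng.expovariate(mu)
            job_done = max(job_done, completion[k])   # join: last task to finish
        total_resp += job_done - t_arr
    return total_resp / n_jobs

for K in (1, 2, 4, 8):
    print(K, round(hfj_mean_response(K, lam=0.5, mu=1.0), 3))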

8.
Mean value analysis (MVA) is an efficient algorithm for determining the mean sojourn time, the mean queue length, and the throughput in a closed multiclass queueing network. It provides exact results for the class of product-form networks. Often different classes have different service requirements in FCFS queues, but such networks are not of product form. There are several possibilities to compute performance measures for such nodes and networks. In this paper we present an approximation formula for multiple-server FCFS queues with class-dependent service times as a Norton flow-equivalent product node, where the departure rate of any class depends on the number of customers of all classes in the queue. We will use this approximation in the sojourn time formula of some exact and approximate MVA algorithms.
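For reference, here is a minimal sketch of exact single-class MVA for a closed product-form network of single-server FCFS queues with no think time. The service demands and the population are made-up numbers; the paper's contribution, the class-dependent FCFS approximation, would replace the per-queue residence-time step below.

# Minimal sketch of exact single-class MVA for a closed product-form network
# of single-server FCFS queues.  Service demands and population N are
# illustrative values.
def mva(demands, N):
    M = len(demands)
    L = [0.0] * M                      # mean queue lengths, empty network
    for n in range(1, N + 1):
        R = [demands[i] * (1.0 + L[i]) for i in range(M)]   # mean residence times
        X = n / sum(R)                 # throughput with n customers
        L = [X * R[i] for i in range(M)]                     # Little's law per queue
    return X, R, L

X, R, L = mva([0.5, 0.3, 0.2], N=10)
print("throughput:", round(X, 4), "queue lengths:", [round(l, 3) for l in L])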

9.
Virtual-channel flow control
Network throughput can be increased by dividing the buffer storage associated with each network channel into several virtual channels. Each physical channel is associated with several small queues, virtual channels, rather than a single deep queue. The virtual channels associated with one physical channel are allocated independently but compete with each other for physical bandwidth. Virtual channels decouple buffer resources from transmission resources. This decoupling allows active messages to pass blocked messages using network bandwidth that would otherwise be left idle. The paper studies the performance of networks with virtual channels using both analysis and simulation. These studies show that virtual channels increase network throughput, by a factor of four for 10-stage networks, and reduce the dependence of throughput on the depth of the network.

10.
The concept of Quality of Service (QoS) networks has gained growing attention recently, as the traffic volume in the Internet constantly increases and QoS guarantees are essential to ensure proper operation of most communication-based applications. A QoS switch serves m incoming queues by transmitting packets arriving at these queues through one output port, one packet per time step. Each packet is marked with a value indicating its priority in the network. Since the queues have bounded capacities and the rate of arriving packets can be much higher than the transmission rate, packets can be lost due to insufficient queue space. The goal is to maximize the total value of transmitted packets. This problem encapsulates two dependent questions: buffer management, namely which packets to admit into the queues, and scheduling, i.e., which queue to use for transmission in each time step. We use competitive analysis to study online switch performance in QoS-based networks. Specifically, we provide a novel generic technique that decouples the buffer management and scheduling problems. Our technique transforms any single-queue buffer management policy (preemptive or non-preemptive) into a scheduling and buffer management algorithm for our general m-queue model, whose competitive ratio is at most twice the competitive ratio of the given buffer management policy. We use our technique to derive concrete algorithms for the general preemptive and non-preemptive cases, as well as for the interesting special cases of the 2-value model and the unit-value model. We also provide a 1.58-competitive randomized algorithm for the unit-value case. This case is interesting by itself since most current networks (e.g., IP networks) do not yet incorporate full QoS capabilities and treat all packets equally.

11.
A discrete-time tandem network of cut-through queues is presented. The model allows finite capacity queues, blocking, and bursty traffic. A new bursty arrival process, IBK(k), for cut-through traffic is introduced. The tandem network is analyzed using single-node decomposition. Each queue is analyzed numerically in isolation, assuming that its arrival and service processes are known. The parameters of the arrival and service processes of the queues are obtained using an iterative scheme. The results obtained are approximate, and validation tests have shown that the model has good accuracy. Using this model, the packet loss, throughput, and queue length distributions are obtained for different traffic parameters and queue sizes.

12.
We consider a queue with the arrival process, the service time process, and the service rate process as regenerative processes. We provide conditions for its stability, rates of convergence, finiteness of moments, and functional limit theorems. This queue can model a queue serving ABR and UBR traffic in an ATM switch; a multiple access channel with a TDMA or CDMA protocol and fading; a queue holding best-effort or controlled and guaranteed traffic in a router in the integrated service architecture (ISA) of the IP-based Internet; and a scheduler in the router of a differentiated services architecture. In the process we also provide results for a queue with a leaky-bucket-controlled bandwidth scheduler; this result is of independent interest. We extend these results to feed-forward networks of queues. We also obtain results for the case when the arrival rate to the queue can be feedback-controlled based on the congestion information in the queue (as in the ABR service in ATM networks or in real-time applications controlled by the RTCP protocol in the Internet).

13.
Ali. Performance Evaluation, 2005, 60(1-4): 327-343
We consider a queueing system with a number of identical exponential servers. Each server has its own queue with unlimited capacity. The service discipline in each queue is first-come-first-served (FCFS). Customers arrive according to a state-dependent Poisson process with an arrival rate which is a non-increasing function of the number of customers in the system. Upon arrival, a customer must join a server's queue according to a stationary state-dependent policy, where the state is taken to be the number of customers in the servers' queues. No jockeying among queues is allowed. Each arriving customer is limited to a generally distributed patience time, after which it must depart the system and is considered lost. Two models of customer behavior are considered: deadlines until the beginning of service and deadlines until the end of service. We seek an optimal policy to assign an arriving customer to a server's queue. We show that, when the distribution of customer impatience satisfies a certain property, the policy of joining the shortest queue (SQ) stochastically minimizes the number of lost customers during any finite interval in the long run. This property is shown to always hold for the case of deterministic customer impatience.
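To make the SQ rule and the impatience model concrete, here is a small event-driven sketch: K exponential servers with their own FCFS queues, Poisson arrivals routed to the shortest queue, and a deterministic deadline until the beginning of service. All parameter values are illustrative assumptions; the simulation is not part of the paper's proof.

import heapq, random

# Event-driven sketch of shortest-queue (SQ) routing with customer impatience
# (deadline until the beginning of service).  All parameters are illustrative.
def simulate_sq(K=3, lam=2.5, mu=1.0, deadline=2.0, n_arrivals=100_000, seed=7):
    rng = random.Random(seed)
    events = []                                  # (time, kind, server)
    heapq.heappush(events, (rng.expovariate(lam), "arrival", None))
    queue = [[] for _ in range(K)]               # waiting customers: arrival times
    busy = [False] * K
    lost = served = 0

    def start_service(now, k):
        nonlocal lost, served
        while queue[k]:
            arr = queue[k].pop(0)
            if now - arr > deadline:             # patience expired before service
                lost += 1
                continue
            busy[k] = True
            served += 1
            heapq.heappush(events, (now + rng.expovariate(mu), "departure", k))
            return
        busy[k] = False

    arrivals = 0
    while events and arrivals < n_arrivals:
        now, kind, k = heapq.heappop(events)
        if kind == "arrival":
            arrivals += 1
            j = min(range(K), key=lambda i: len(queue[i]) + busy[i])  # SQ rule
            queue[j].append(now)
            if not busy[j]:
                start_service(now, j)
            heapq.heappush(events, (now + rng.expovariate(lam), "arrival", None))
        else:
            start_service(now, k)
    return lost, served

print(simulate_sq())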

14.
We study a multiprocessing computer system which accepts parallel programs that have a fork-join computational paradigm. The multiprocessing computer system under study is modeled as K homogeneous servers, each with an infinite capacity queue. Parallel programs arrive at the multiprocessing system according to a series-parallel phase-type interarrival process with a mean arrival rate of h. Upon a program's arrival, it forks into K independent tasks and each task is assigned to a unique server. Each task's service time has a k-stage Erlang distribution with a mean service time of λ. A parallel program is completed upon the completion of its last task. This kind of queueing model has no known closed-form solution in the general (K⩾2) case. In this paper, we show that by carefully modifying the arrival and service distributions at some imbedded points in time, we can obtain tight performance bounds. We also provide a computationally efficient algorithm for obtaining upper and lower bounds on the expected response time. The methodology is flexible and allows one to trade off the tightness of the bounds and the computational cost.

15.
Processor-sharing queues are often used to model file transmission in networks. While sojourn time is a common performance metric in the queueing literature, average transmission rate is the more commonly discussed metric in the networking literature. Whereas much is known about sojourn times, little is known about the average service rate experienced by jobs in processor-sharing queues. We focus here upon performance requirements in the form of an upper bound on the probability of failing to achieve a specified minimum transmission rate or a specified minimum average rate. For an M/G/1 processor-sharing queue, we give a closed-form expression for this violation probability. We derive closed-form expressions for the marginal service rate with respect to the violation probability and to the minimum transmission rate, and characterize when each is binding. We then consider the effect of using connection access control by modeling an M/G/1/K processor-sharing queue, and discuss the relationship between queue service rate, queue limit, violation probability, and blocking probability. Finally, we consider a two-class discriminatory processor-sharing queue, and discuss what combinations of class weighting and service rate can be used to achieve specified minimum rate violation probabilities for both classes.
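A simulation sketch of the quantity discussed above: in an M/G/1 processor-sharing queue, each job's average transmission rate is its size divided by its sojourn time, and we estimate the probability that this rate falls below a minimum r_min. Exponential job sizes, the capacity C, and the threshold are illustrative assumptions, not the closed-form expression derived in the paper.

import random

# Sketch of an M/G/1 processor-sharing queue.  For each job we record its
# average transmission rate (size / sojourn time) and estimate the fraction
# of jobs falling below r_min.  All parameters are illustrative.
def ps_rate_violation(lam=0.7, mean_size=1.0, C=1.0, r_min=0.4,
                      n_jobs=50_000, seed=3):
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(lam)
    active = []                      # per job: [remaining work, arrival time, size]
    finished = violations = 0
    while finished < n_jobs:
        if active:
            rate = C / len(active)                      # PS: equal share of capacity
            t_done = t + min(j[0] for j in active) / rate
        else:
            t_done = float("inf")
        if next_arrival <= t_done:                      # next event: an arrival
            dt = next_arrival - t
            if active:
                for j in active:
                    j[0] -= dt * rate
            t = next_arrival
            size = rng.expovariate(1.0 / mean_size)     # exponential job size
            active.append([size, t, size])
            next_arrival = t + rng.expovariate(lam)
        else:                                           # next event: a completion
            dt = t_done - t
            for j in active:
                j[0] -= dt * rate
            t = t_done
            done = min(active, key=lambda j: j[0])
            active.remove(done)
            finished += 1
            avg_rate = done[2] / (t - done[1])          # size / sojourn time
            if avg_rate < r_min:
                violations += 1
    return violations / finished

print("P(average rate < r_min) =", round(ps_rate_violation(), 4))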

16.
Analytic queueing network models are being used to analyze various optimization problems such as server allocation, design and capacity issues, optimal routing, and workload allocation. The mathematical properties of the relevant performance measures, such as throughput, are important for optimization purposes and for insight into system performance. We show that for closed queueing networks of m arbitrarily connected single-server queues with n customers, throughput, as a function of a scaled, constrained workload, is not concave. In fact, the function appears to be strictly quasi-concave. There is a constraint on the total workload that must be allocated among the servers in the network. However, for closed networks of two single-server queues, we prove that our scaled throughput is concave when there are two customers in the network and strictly quasi-concave when there are more than two customers. The mathematical properties of both the scaled throughput and the reciprocal throughput are demonstrated graphically for closed networks of two and three single-server queues.
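As a rough illustration of the workload-allocation question, the sketch below computes the throughput of a closed network of two single-server queues by MVA while a fixed total workload W is split between the two servers. The recursion, the grid of splits, and the unnormalized throughput are illustrative assumptions; the paper's "scaled" throughput may be normalized differently.

# Sketch: throughput of a closed network of two single-server queues as a
# fixed total workload W is split between the servers.  Values are illustrative.
def closed_throughput(demands, n_customers):
    L = [0.0] * len(demands)
    for n in range(1, n_customers + 1):
        R = [d * (1.0 + l) for d, l in zip(demands, L)]   # residence times
        X = n / sum(R)                                    # throughput
        L = [X * r for r in R]                            # Little's law
    return X

W, n = 2.0, 5
for w in [0.2, 0.5, 1.0, 1.5, 1.8]:
    print(f"split {w:.1f}/{W - w:.1f}: throughput = {closed_throughput([w, W - w], n):.4f}")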

17.
We consider an infinite tandem network in which every node is capable of hearing its neighbors up to a given distance n. At any moment of time every node may contain at the top of its queue a message destined to one of its neighbors. This network can be used as a model for a microwave or optical link with many users. For small and large n we investigate the maximal selection of nodes in the network for which their transmissions are collision-free. For a large hearing range we show that the upper bound on the maximal selection, which is found herein, is asymptotically achievable. For small hearing ranges we show that a greedy selection is better but not asymptotically optimal. We also specify a sequence of upper bounds which converge to the maximal throughput.

18.
We consider synchronization queues with finite or infinite buffers. There is one flow of tokens for each buffer. The input flow to each buffer, called a stream, is assumed to be a point process with finite intensity. Tokens arriving at each buffer have to wait to form a synchronized group and then depart the system. We are concerned with the output processes of these queues. The output processes of synchronization queues with some buffers having infinite capacities are studied. Further, we consider synchronization queues with two finite buffers and discuss the system state at output points in time. For these problems, some results that are already known are reviewed and some new results are introduced.

19.
A token passing ring can be described as a system of M queues with one server that rotates around the queues sequentially. Georgiadis and Szpankowski (1992) considered rings where the token (server) performs min(x, l_j) services on queue j, where x is the size of queue j upon arrival of the token and l_j is a fixed limit of service for queue j. The token then spends some random time switching to the next queue. For j = 1, ..., M, arrivals to queue j are Poisson with rate λ_j, and service times have mean s_j and are independent of the arrival and switchover processes. The purpose of this paper is to give an alternate and simpler proof of the stability conditions given by Georgiadis and Szpankowski, using Lyapunov functions. An additional assumption is made that the second moments of the service and switchover times are finite.
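For context, one commonly cited form of the stability condition for such l_j-limited polling systems is ρ + λ_j R / l_j < 1 for every queue j, where ρ = Σ_i λ_i s_i and R is the mean total switchover time per cycle. The check below uses that form and made-up numbers purely as an illustration; it is an assumption here, not a verbatim restatement of the Georgiadis-Szpankowski theorem.

# Hedged sketch: check the commonly cited condition rho + lambda_j * R / l_j < 1
# for every queue j, with rho = sum_i lambda_i * s_i and R the mean total
# switchover time per cycle.  Treat this form as an assumption, not as the
# paper's exact statement.
def stable(lambdas, services, limits, R):
    rho = sum(l * s for l, s in zip(lambdas, services))   # total offered load
    return all(rho + lam * R / lj < 1.0
               for lam, lj in zip(lambdas, limits))

# Illustrative numbers: 3 queues, mean switchover per cycle R = 0.3.
print(stable(lambdas=[0.2, 0.3, 0.1], services=[1.0, 0.8, 1.2],
             limits=[2, 1, 3], R=0.3))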

20.
Most current communication networks, including the Internet, are packet-switched networks. One of the main reasons behind the success of packet-switched networks is the possibility of performance gain due to multiplexing of network bandwidth. The multiplexing gain crucially depends on the size of the buffers available at the nodes of the network to store packets at the congested links. However, most of the previous work assumes the availability of infinite buffer size. In this paper, we study the effect of finite buffer size on the performance of networks of interacting queues. In particular, we study the throughput of flow-controlled loss-less networks with finite buffers. The main result of this paper is the characterization of a dynamic scheduling policy that achieves the maximal throughput with a minimal finite buffer at the internal nodes of the network under a memoryless (e.g., Bernoulli IID) exogenous arrival process. However, this ideal policy is rather complex and, hence, difficult to implement. This leads us to the design of a simpler and possibly implementable policy. We obtain a natural trade-off between throughput and buffer size for such an implementable policy. Finally, we apply our results to packet switches with a buffered crossbar architecture.

