Similar Documents
20 similar documents found (search time: 31 ms)
1.
A communication network is modeled by a weighted graph. The vertices of the graph represent stations with storage capabilities, while the edges represent communication channels (or other information-processing media). Channel-capacity weights are assigned to the edges of the network. The network is assumed to operate in a store-and-forward manner, so that when a channel is busy, messages directed into it are stored at the station, where they join a queue governed by a first-come first-served service discipline. Assuming that fixed-length messages arrive at the network at random, following the statistics of a Poisson point process, we solve for the steady-state distribution of the overall message delay time, for the average message waiting times at the individual stations, for the average memory-size requirements at the stations, as well as for other statistical characteristics of the message flow along a communication path.
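The per-station averages in such a model follow Pollaczek-Khinchine-type results. As a minimal numeric sketch (not the paper's full network solution), assuming each channel in isolation behaves as an M/D/1 queue with Poisson arrival rate `lam` and deterministic service rate `mu`:

```python
def md1_mean_wait(lam, mu):
    """Mean queueing delay in an M/D/1 queue
    (Pollaczek-Khinchine formula, deterministic service)."""
    rho = lam / mu
    assert rho < 1, "the queue must be stable"
    return rho / (2 * mu * (1 - rho))

# Fixed-length messages of one time unit (mu = 1), arrival rate 0.5:
print(md1_mean_wait(0.5, 1.0))  # 0.5 time units of average waiting
```

With fixed-length messages the service time is deterministic, which halves the queueing term relative to an M/M/1 queue at the same load.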

2.
The hyper hexa-cell (HHC) interconnection network is a relatively new interconnection network constructed from hexa-cell and hypercube topologies. HHC was developed to support parallel algorithms for solving computation- and communication-intensive problems. To support solving these problems efficiently, this paper presents two broadcast communication operations for HHC using the store-and-forward technique, namely one-to-all and all-to-all broadcast, which allow a message to be transmitted along the shortest path from the source node to all other nodes. The broadcast operations are evaluated analytically in terms of communication steps, communication latency, communication cost, and speed over HHC and wraparound square mesh interconnection networks. The analytical results for HHC show that in both one-to-all and all-to-all broadcast, the maximum number of communication steps is d + 2, where d is the dimension of the HHC, a small integer. Also, the communication latency increases linearly as the network size increases, and the communication cost of the broadcast operations is O(2^{d-1}). Moreover, the speed of both broadcast operations on a d-dimensional HHC is (d + 2) times the speed of the electronic links. Comparing HHC with the wraparound square mesh, both one-to-all and all-to-all broadcast operations over HHC perform much better than over the mesh in terms of maximum communication steps, communication latency, and speed.
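The HHC construction itself is not reproduced here, but the hypercube component's one-to-all broadcast by recursive doubling, which underlies such d-plus-a-constant step counts, can be sketched as follows (illustrative only; `hypercube_broadcast` is not from the paper):

```python
def hypercube_broadcast(d, source=0):
    """One-to-all broadcast on a d-dimensional hypercube by
    recursive doubling: in step i, every informed node forwards
    the message across dimension i. Completes in exactly d steps."""
    informed = {source}
    steps = 0
    for i in range(d):
        informed |= {node ^ (1 << i) for node in informed}
        steps += 1
    return informed, steps

nodes, steps = hypercube_broadcast(4)
assert len(nodes) == 2 ** 4 and steps == 4
```

Each step doubles the set of informed nodes, which is why the step count grows with the dimension rather than the node count.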

3.
Bottleneck Flow Control
The problem of optimally choosing message rates for users of a store-and-forward network is analyzed. Multiple users sharing the links of the network each attempt to adjust their message rates to achieve an ideal network operating point or an "ideal tradeoff point between high throughput and low delay." Each user has a fixed path or virtual circuit. In this environment, a basic definition of "ideal delay-throughput tradeoff" is given and motivated. This definition concentrates on a fair allocation of network resources at network bottlenecks. This "ideal policy" is implemented via a decentralized algorithm that achieves the unique set of optimal throughputs. All sharers constrained by the same bottleneck are treated fairly by being assigned equal throughputs. A generalized definition of ideal tradeoff is then introduced to provide more flexibility in the choice of message rates. With this definition, the network may accommodate users with different types of message traffic. A transformation technique reduces the problem of optimizing this performance measure to the problem of optimizing the basic measure.
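The fair allocation at bottlenecks described above corresponds to the classic max-min fair (progressive-filling) computation; a centralized sketch, assuming fixed paths and known link capacities (the paper's algorithm is decentralized):

```python
def max_min_fair(capacities, paths):
    """Bottleneck (max-min fair) rate allocation.
    capacities: dict link -> capacity
    paths: dict user -> list of links on the user's fixed path
    Repeatedly find the link offering the smallest fair share,
    freeze each of its remaining users at that rate, and repeat."""
    rates = {}
    remaining = dict(capacities)
    active = {u: set(p) for u, p in paths.items()}
    while active:
        # fair share each link could give its still-active users
        share = {
            l: remaining[l] / sum(1 for p in active.values() if l in p)
            for l in remaining
            if any(l in p for p in active.values())
        }
        bottleneck = min(share, key=share.get)
        rate = share[bottleneck]
        for u in [u for u, p in active.items() if bottleneck in p]:
            rates[u] = rate
            for l in paths[u]:
                remaining[l] -= rate
            del active[u]
    return rates

# Three users over two unit-capacity links: A uses both, B link 1, C link 2
print(max_min_fair({1: 1.0, 2: 1.0}, {"A": [1, 2], "B": [1], "C": [2]}))
```

In this example every user shares a saturated link with one other user, so all three end up with equal rates; users bottlenecked at different links would be frozen at different levels.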

4.
A general model of a virtual circuit network consisting of a number of servers and a number of traffic classes is considered. A traffic class is identified by the sequence of servers that should be visited, and the corresponding service rates, before a message (customer) of the class leaves the network. The following cases are distinguished: (1) the messages need nonpreemptive service; (2) the service of a message can be preempted at any time; (3) pipelining of the service in a sequence of servers is allowed; and (4) pipelining is not allowed. All of these cases arise in different transmission switching techniques and scheduling schemes. A fluid model that emerges when both preemption and pipelining are allowed is considered. Scheduling schemes in the fluid model are compared with corresponding ones in the network with nonpreemptive service and no pipelining. The problem of evacuating the network from an initial backlog without further arrivals is identified in the fluid model. Based on that, a policy with nearly optimal evacuation time is identified for the store-and-forward case. Finally, scheduling with deadlines is considered, and it is shown that in the fluid model the evacuation problem is equivalent to a linear programming problem. The evacuation times under different work-conserving policies are considered in specific examples.

5.
A technique is presented for deriving approximate average packet time-delay formulas in a packet-switching store-and-forward network. The approximation technique appropriately incorporates arrival- and departure-packet interferences into recent queueing results for a noninterfered communication path. Simulation results, for basic flow configurations, show the approximate time-delay function to constitute a very close lower bound over a wide range of traffic intensities. The resulting average time-delay approximation under a Poisson assumption is shown to constitute an upper bound. A procedure for approximate network time-delay analysis is then presented.

6.
We study the delay performance of all-optical packet communication networks configured as ring and bus topologies employing cross-connect switches (or wavelength routers). Under a cross-connect network implementation, a packet experiences no (or minimal) internal queueing delays. Thus, the network can be implemented by high-speed all-optical components. We further assume a packet-switched network operation, such as one using slotted ring or bus access methods. In this case, a packet's delay is known before it is fed into the network. This can be used to determine whether a packet must be dropped (when its end-to-end delay requirement is not met) at the time it accesses the network. It also leads to better utilization of network capacity resources. We also derive the delay performance for networks under a store-and-forward network operation. We show these implementations to yield very close average end-to-end packet queueing delay performance. We note that a cross-connect network operation can yield somewhat higher queueing-delay variance levels. However, the mean queueing delay for all traffic flows is the same under a cross-connect network operation (under equal nodal traffic loading), while that in a store-and-forward network increases as the path length increases. For a ring network loaded by a uniform traffic matrix, the queueing delay incurred by 90% of the packets in a cross-connect network may be lower than that experienced in a store-and-forward network. We also study a store-and-forward network operation under a nodal round-robin (fair queueing) scheduling policy. We show the variance performance of the packet queueing delay for such a network to be close to that exhibited by a cross-connect (all-optical) network.

7.
This paper deals with the problem of store-and-forward deadlock prevention in store-and-forward networks. The presented solution uses time-stamping of all messages in the network and a nonpreemptable message-exchange mechanism. By combining these ideas, a new distributed flow control procedure is derived which guarantees that all messages are delivered to their own destinations, thus avoiding both deadlock and livelock without any message loss. It is shown that some properties of this procedure depend on the policy for allocating exchange buffers to nodes. On the one hand, an optimal allocation strategy is presented which results in an optimal deadlock-prevention procedure. The procedure is network-size- and topology-independent and allows unrestricted packet routing. On the other hand, the allocation of one exchange buffer per node is discussed, which, even if not optimal, makes the derived deadlock-prevention procedure completely independent of network reconfigurations. The latter feature is extremely important from the practical point of view, and therefore such a solution is strongly recommended. When compared to store-and-forward deadlock-prevention procedures described so far, which lack some or all of these desirable properties, the procedure presented here behaves favorably. However, it introduces a drawback of its own, namely the possibility of extra hops resulting from exchange operations. It is argued that this drawback appears rarely in practice, and some strategies aimed at reducing it are proposed.

8.
We consider a multiple-access broadcast communication channel which serves a network of terminals, each characterized by a distinct priority level. Higher-priority terminals require, on the average, faster access across the communication channel than lower-priority ones. We present a terminal priority contentionless access (TPCA) control scheme for sharing such a channel on a conflict-free basis. A ready terminal wishing to access the channel will wait for the channel to become idle, and then defer its access until after higher-priority terminals are given the opportunity to transmit. The model accounts for acquisition delays (including propagation and turnaround delay components), idle detection times, multipacket message structures, and different arrival intensities at different terminals. Analytical results are derived for the moments and transform of the message waiting-time distribution at each network terminal by relating the waiting times in our model to those in an appropriate M/G/1 queue with HOL (head-of-the-line) priorities. Performance curves are presented under various system parameter conditions which correspond to applications involving digital radio local distribution systems and metropolitan-area, packet radio, and local-area communication networks.

9.
A TDMA access-control scheme operating under a nonpreemptive message-based priority discipline is considered and analyzed. The moment generating function (MGF) of the message waiting-time is obtained, at an arbitrary station of the network, under the assumptions of a Poisson message arrival stream and random message lengths governed by a general distribution, for each priority class. Using these results, explicit formulas for any moment of the message delay can be obtained. In particular, expressions for the average and standard deviation of the message waiting-time are presented, for the case when short (e.g., interactive) messages have priority over longer (e.g., batch) messages.
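The nonpreemptive priority results described here specialize, for mean values, to Cobham's formula for the M/G/1 queue; a sketch with two classes, where class 0 (short, interactive) has priority (the paper's TDMA frame structure is not modeled):

```python
def cobham_wait(lams, es, es2):
    """Mean waiting time per class in a nonpreemptive M/G/1
    priority queue (Cobham's formula). Class 0 has the highest
    priority. lams: arrival rates; es: E[S]; es2: E[S^2]."""
    R = 0.5 * sum(l * s2 for l, s2 in zip(lams, es2))  # mean residual work
    rho = [l * s for l, s in zip(lams, es)]
    waits, sigma = [], 0.0
    for k in range(len(lams)):
        sigma_prev, sigma = sigma, sigma + rho[k]
        waits.append(R / ((1 - sigma_prev) * (1 - sigma)))
    return waits

# Short messages (class 0) ahead of long batch ones (class 1);
# exponential lengths, so E[S^2] = 2 E[S]^2 for each class
print(cobham_wait(lams=[0.3, 0.1], es=[1.0, 3.0], es2=[2.0, 18.0]))
```

Note that the high-priority class still waits behind the residual work `R` of whatever message is in service, since the discipline is nonpreemptive.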

10.
The advantages of store-and-forward (S/F) message transfer at the application layer and problems with existing systems are discussed. Current systems suffer from the limitation that if an intermediate message transfer agent (MTA), the entity responsible for storing and forwarding messages, cannot store a message in its entirety due to insufficient buffer space, then the transfer fails. A mechanism called message fragmentation is introduced to transfer messages too large to be completely stored on an intermediate MTA automatically and efficiently. In addition to letting a large message pass through the network, fragmentation improves performance in an S/F message-based system. Schemes developed for flow control, deadlock prevention, and buffer management in packet-switched networks are described, and their suitability for use in the message-handling environment is discussed. Several novel schemes for buffer management in the message-handling environment are presented. The EAN X.400-based S/F message-handling system developed at the University of British Columbia is briefly cited as an example.

11.
This article deals with store-and-forward deadlock prevention in communication networks. The approach we adopt is to establish buffer classes in order to prevent cyclic waiting chains. Solutions of this type usually tend to require many buffers. The main contribution is in showing that the number of required buffers can be reduced considerably by employing a hierarchical organization of the network. A new hierarchical scheme for arbitrary networks is proposed that features a tradeoff between the communication overhead and the buffer requirements of the routing. This tradeoff can be shown to be close to optimal.
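For contrast with the article's hierarchical scheme, the standard buffer-class construction it improves on can be sketched: give each node one buffer per hop count, so that buffer requests form an acyclic graph and no cyclic waiting chain can arise (the ring topology and node/class counts below are illustrative, not from the article):

```python
def hop_class_buffer_graph(num_nodes, num_classes):
    """Buffer graph for a unidirectional ring under the hop-count
    buffer-class scheme: a message holding a class-c buffer at node n
    may only move into a class-(c+1) buffer at the next node."""
    edges = []
    for n in range(num_nodes):
        for c in range(num_classes - 1):
            edges.append(((n, c), ((n + 1) % num_nodes, c + 1)))
    return edges

def is_acyclic(edges):
    """Cycle check: repeatedly remove nodes with no outgoing edge
    into the surviving set; a cycle is whatever cannot be removed."""
    out, nodes = {}, set()
    for u, v in edges:
        out.setdefault(u, set()).add(v)
        nodes |= {u, v}
    changed = True
    while changed:
        changed = False
        for n in list(nodes):
            if not (out.get(n, set()) & nodes):
                nodes.discard(n)
                changed = True
    return not nodes

# One buffer class per possible hop keeps the buffer graph loop-free:
assert is_acyclic(hop_class_buffer_graph(5, 5))
# A single shared buffer class on the same ring allows a cyclic wait:
assert not is_acyclic([((n, 0), ((n + 1) % 5, 0)) for n in range(5)])
```

The cost of this classic scheme, one buffer class per hop, is exactly the buffer growth the article's hierarchical organization is designed to reduce.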

12.
Stochastic analysis of a slotted FIFO communication channel
Messages arrive randomly at one end of a slotted communication channel. They are assigned to (packed in) packets of fixed duration which queue up for transmission in first-in-first-out order; the packets are sent one per time slot. In a stochastic setting, where message durations are also random, we analyze a model which yields statistics on message delays and the number of waiting messages, assuming that the assignment protocol is the well-known next-fit rule of one-dimensional bin packing. A stability condition is obtained as a function of general discrete message-length distributions. As a by-product, we contribute a new result to the literature on the probabilistic analysis of the static next-fit bin-packing rule, viz. the limiting expected bin occupancy for general discrete distributions. Specializations of the results to constant message lengths and to uniform message-length distributions are worked out in detail.
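The next-fit rule referenced above is simple to state in code; a sketch assuming every message fits within a single packet:

```python
def next_fit_pack(message_lengths, packet_size):
    """Pack messages into fixed-size packets with the next-fit rule:
    keep filling the current packet; when a message does not fit,
    close the packet and open a new one (closed packets are never
    revisited). Assumes each message fits in one packet."""
    packets, current, room = [], [], packet_size
    for m in message_lengths:
        if m > room:                      # close the current packet
            packets.append(current)
            current, room = [], packet_size
        current.append(m)
        room -= m
    if current:
        packets.append(current)
    return packets

print(next_fit_pack([3, 4, 2, 5, 1], packet_size=6))  # [[3], [4, 2], [5, 1]]
```

Because closed packets are never reopened, next-fit runs in one pass with constant state, which is what makes its queueing analysis tractable.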

13.
We present an exact delay analysis for a packet-switching concentrating system consisting of an arbitrary number of active stations connected in tandem by unidirectional, asynchronous transmission lines, all with identical transmission rates. Packetized messages from exogenous sources enter the system at every station, are handled from station to station in a store-and-forward fashion, and exit at the downstream end; there are no intermediate departures. The service discipline at each station is either FCFS, fixed priority, or alternating priority; the transmission of a packet is not interrupted while in progress. We assume that the packets have identical length (in bits), that the sources are independent, and that each source generates batch Poisson traffic. The FCFS case was solved by Kaplan [7] and Shalmon and Kaplan [19]; here we analyze the priority disciplines. For the packets and messages from each source, we obtain the steady-state moment generating functions of their end-to-end waiting times. We offer simple formulas for the corresponding mean waiting times and show that the Poisson approximation for departures is unreliable. The analysis for all three disciplines generalizes to the case where the line capacities are nonincreasing in the direction of flow; for the priority disciplines, it further generalizes to a concentrating tree network and to an arrival process somewhat more general than batch Poisson.

14.
The last few years have witnessed the intensive growth of computer communication networks. The need for nationwide and multination computer communication systems brought about the development of packet-switching networks such as the ARPANET. In this paper we examine a model for computer-to-computer communication via a satellite link. In each network, a single node, the satellite communication concentrator (SCC), manages the flow of information between the terminals in the network and the satellite link. The SCC buffers messages from the terminals and retransmits them over the satellite channel. Buffer space claimed by a message is made free only after the SCC receives an acknowledgment from the receiving network; transmission errors cause the buffer to retransmit the message. The statistical behavior of such a system is considered.

15.
In this paper we study a message resequencing problem in a store-and-forward computer network where messages may go out of order while traversing logical channels. The logical channels are assumed to consist of multiple physical links which may be of different capacities. A message is dispatched to the fastest available link. Resequencing methods suggested in the literature [3] (resequencing at the channel level and resequencing at the virtual circuit level) are investigated for this link selection rule. The analysis is done on a two-node network connected by multiple links. The source node together with the set of outgoing links is modeled as an M/M/m queue with servers of different rates. The resequencing delay distribution and the average resequencing delay are derived. For multihop networks, the effects of message length, the number of links, link service rates, and the resequencing method on resequencing delay are investigated by simulation.

16.
In this paper, we analyse the message waiting times in a local area network (LAN) that uses the demand‐priority access method. This is a priority‐based round‐robin arbitration method where the central controller (the repeater) polls its connected ports to determine which have transmission requests pending and the requests' priority classes. We model it as a 2‐priority M/G/1 queue with multiple server vacations and switchover time between service periods. The service discipline is nonpreemptive and the length of the switchover time is dependent upon the priority class of the preceding message served as well as that of the message to be served next. We provide an approximate analysis for the waiting times of both message classes and derive expressions for the Laplace–Stieltjes transforms (LST) of the stationary distributions and the mean waiting times. We conclude with numerical and simulation results to show the applicability and accuracy of the analytical model. This revised version was published online in June 2006 with corrections to the Cover Date.
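The single-class building block behind such vacation models is the decomposition result: the mean wait equals the ordinary M/G/1 (Pollaczek-Khinchine) wait plus the mean residual vacation time. A sketch of that building block only (the paper's two-priority, class-dependent switchover structure is not modeled):

```python
def mg1_vacation_wait(lam, es, es2, ev, ev2):
    """Mean waiting time in an M/G/1 queue with multiple server
    vacations: the classical Pollaczek-Khinchine wait plus the
    mean residual vacation time (decomposition result).
    es, es2: E[S], E[S^2] of service; ev, ev2: E[V], E[V^2] of vacations."""
    rho = lam * es
    assert rho < 1, "the queue must be stable"
    pk = lam * es2 / (2 * (1 - rho))  # ordinary M/G/1 queueing term
    return pk + ev2 / (2 * ev)        # + residual vacation time

# Exponential service (E[S^2] = 2 E[S]^2) and exponential vacations:
print(mg1_vacation_wait(0.4, 1.0, 2.0, 0.5, 0.5))
```

In the LAN setting the "vacation" plays the role of the repeater's polling and switchover activity seen by an arriving message.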

17.
For communication services over transparent optical paths, specified in terms of bit rate and bit error probability, the maximum length is determined by detrimental physical effects like dispersion, crosstalk, noise or nonlinearities. Therefore, fiber-optic wide area networks employing wavelength division multiplexing (WDM) and optical bypassing have to be divided into several small transparent partitions. In this paper a scheme accounting for sufficiently short optical paths is introduced which is called Modular WDM-Gridconnect. In this approach the burden of interconnecting different network partitions (i.e., by using an overlay transport system as an interface) is spread over almost all nodes of the network. This property is referred to as distributed interfacing. The network can be extended step-by-step in a modular fashion adding no more than two nodes to the coverage area at a time. The advantageous properties of such an arrangement as compared to a straightforward partitioning are characterized by the cumulative transit traffic and the path lengths in a Modular WDM-Gridconnect cluster.

18.
This paper examines the relationship between range and performance of meteor burst communication systems. Three performance measures are considered: average duty cycle, average throughput for a system which transmits fixed-length message packets at a fixed data rate, and waiting time to deliver a fixed-length message. In particular, we examine the range dependence of the three measures at ranges of less than 400 km. It will be demonstrated that communication connectivity can be maintained at short ranges by a combination of efficient protocol and proper antenna pattern designs.

19.
Coding for skew-tolerant parallel asynchronous communications
A communication channel consisting of several subchannels transmitting simultaneously and asynchronously is considered, an example being a board with several chips, where the subchannels are wires connecting the chips and differences in the lengths of the wires can result in asynchronous reception. A scheme that allows transmission without an acknowledgment of the message, therefore permitting pipelined communication and providing a higher bandwidth, is described. The scheme allows a certain number of transitions from a second message to arrive before reception of the current message has been completed, a condition called skew. Necessary and sufficient conditions are derived for codes that can detect skew, as well as for codes that are skew-tolerant, i.e., can correct the skew and allow continuous operation. Codes that satisfy the necessary and sufficient conditions are constructed, their optimality is studied, and efficient decoding algorithms are devised. Potential applications of the scheme are in on-chip, on-board, and board-to-board communications, enabling much higher communication bandwidth.

20.
This paper describes the construction of loop-free buffer graphs which avoid four types of buffer deadlocks in store-and-forward networks. 1) Progeny deadlock, where original messages spawn other ones, and buffer contention occurs between the original and progeny messages. This occurs when positive or negative acknowledgments are created, e.g., if messages reverse direction after encountering a path failure. 2) Copy-release deadlock, where a message copy is stored at the source node and the buffer is not released until an acknowledgment is received from the destination node. Buffer contention may arise among the original messages, stored copies, and acknowledgments. 3) Pacing deadlock, where a local flow control protocol is used between a network node and attached terminals. Buffer contention may arise between the message flows into and out of the terminal, preventing the transmission of go-ahead commands. 4) Reassembly deadlock, whereby reassembly of packetized messages at the destination node cannot be completed. The solution presented here has the novel features of not requiring preallocation of reassembly buffers before transmission of multiple packets of a multipacket message, and not requiring dedication of buffer space at intermediate nodes for individual messages. These schemes are believed to have modest buffer requirements at each node, and if adequate buffer pools are provided, will incur negligible performance degradations under normal conditions, with overhead increasing under heavy buffer usage when deadlock is near.
