Similar Documents
20 similar documents found.
1.
In this paper, we present an analytic model for estimating the performance of cut-through buffered banyan networks with finite buffer size. Two conflict resolution policies for the case in which two packets are destined to the same output link are considered, and their performances are compared. Our analytic model enables analysis of networks in which buffers are unevenly distributed, i.e., each stage has a different buffer size. It is shown that if buffers are properly distributed in the network, higher throughput and lower delay are possible, although the improvement is small. Finally, to validate the model, analytic results are compared with simulation results.
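The effect of redistributing a fixed buffer budget can be illustrated with a toy slot-based simulation of tandem finite buffers with random forwarding conflicts; this is not the paper's analytic model of a banyan network, and the stage count, probabilities, and buffer splits below are arbitrary assumptions:

```python
import random

def simulate(buffer_sizes, arrival_prob=0.7, win_prob=0.75,
             slots=200_000, seed=1):
    """Toy slot-based simulation of tandem finite buffers with contention.

    Each slot, the head packet of every stage advances to the next stage with
    probability `win_prob` (a crude stand-in for winning an output-port
    conflict), and only if the next buffer has space; the last stage departs
    under the same probability. A new packet arrives at stage 0 with
    probability `arrival_prob` and is lost if the first buffer is full.
    """
    rng = random.Random(seed)
    occ = [0] * len(buffer_sizes)      # current occupancy per stage
    delivered = 0
    for _ in range(slots):
        if occ[-1] > 0 and rng.random() < win_prob:
            occ[-1] -= 1
            delivered += 1
        for i in range(len(occ) - 2, -1, -1):    # back to front, cut-through style
            if occ[i] > 0 and occ[i + 1] < buffer_sizes[i + 1] and rng.random() < win_prob:
                occ[i] -= 1
                occ[i + 1] += 1
        if rng.random() < arrival_prob and occ[0] < buffer_sizes[0]:
            occ[0] += 1
    return delivered / slots

# Same total budget of 12 buffers, distributed evenly vs. unevenly.
for sizes in ([3, 3, 3, 3], [5, 4, 2, 1], [1, 2, 4, 5]):
    print(sizes, round(simulate(sizes), 4))
```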

2.
For relay systems based on orthogonal frequency-division multiple access (OFDMA), a QoS-oriented resource allocation algorithm is presented that satisfies users' QoS requirements while maximizing system throughput and preserving fairness among users. A delay priority factor and a rate priority factor, derived from each user's queueing delay and rate demand respectively, are first introduced to compute the user's priority; dynamic resource scheduling and allocation are then performed separately on the backhaul and access links. Simulation results show that the new algorithm balances the performance of relay users and direct-transmission users, achieving a low packet loss rate, good satisfaction of GBR requirements, and high system throughput and fairness.
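A minimal sketch of the priority computation, assuming a delay priority factor equal to the head-of-line delay relative to the delay budget and a rate priority factor equal to the GBR relative to the average served rate; the abstract does not give the exact definitions, so these forms and the numbers are illustrative:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    hol_delay_ms: float      # head-of-line waiting delay in the queue
    delay_budget_ms: float   # QoS delay budget of the bearer
    gbr_kbps: float          # guaranteed bit rate demanded by the user
    avg_rate_kbps: float     # average rate served so far

def priority(u: User) -> float:
    # Delay priority factor: grows as the packet approaches its delay budget.
    delay_factor = u.hol_delay_ms / u.delay_budget_ms
    # Rate priority factor: grows when the served rate falls below the GBR.
    rate_factor = u.gbr_kbps / max(u.avg_rate_kbps, 1e-6)
    return delay_factor * rate_factor

users = [User("relay-UE", 40, 100, 256, 180),
         User("direct-UE", 10, 100, 128, 200)]
# Resource blocks on the backhaul or access link would be granted
# in decreasing order of this priority.
for u in sorted(users, key=priority, reverse=True):
    print(u.name, round(priority(u), 3))
```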

3.
孙卫  黄金科 《计算机科学》2017,44(Z6):288-293, 332
In distributed cognitive radio networks, the underutilization of dynamic resources and the absence of a central control unit make MAC-layer protocol design challenging. Targeting the characteristics of cognitive radio networks, a new MAC protocol is proposed that guarantees QoS for delay-critical applications by giving them priority during the channel reservation phase, while also addressing two prominent problems in cognitive radio networks: low spectrum utilization and the hidden terminal problem. To analyze the protocol's performance, a new analytical model is first proposed; the protocol is then compared in simulation with two typical MAC protocols, and the results show that it improves network throughput. Finally, numerical analysis and simulation confirm that the protocol is simple and efficient, achieves high spectrum utilization, satisfies the QoS requirements of delay-sensitive applications, and effectively solves the multi-channel hidden terminal problem.

4.
Smart devices have limited storage and computing capacity and therefore cannot meet the quality-of-service requirements of computation-intensive, delay-sensitive applications. Edge computing and cloud computing are regarded as effective ways to overcome these limitations. To make efficient use of edge and cloud resources and to provide good quality of service in terms of delay and service failure probability, a three-tier computing framework is first proposed; then, taking into account the heterogeneity of edge servers and the delay sensitivity of tasks, an efficient resource scheduling strategy is proposed for the edge tier. The three-tier framework provisions computing resources and transmission delay according to each application's delay sensitivity, ensuring effective use of edge resources and real-time task execution. Simulation results verify the effectiveness of the proposed scheduling strategy and show that it outperforms existing conventional methods.
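An illustrative sketch of a deadline-aware edge-tier scheduling policy in such a three-tier system: tasks are taken in deadline order, placed on the heterogeneous edge server that finishes them earliest, and offloaded to the cloud when no edge server can meet the deadline. The policy, helper names, and numbers are assumptions, not the paper's algorithm:

```python
import heapq

def schedule(tasks, edge_speeds, cloud_speed, cloud_rtt):
    """Greedy deadline-aware placement on heterogeneous edge servers.

    tasks: list of (name, cycles, deadline); edge_speeds: cycles/s per server.
    A task goes to the edge server giving the earliest finish time; if even
    that misses the deadline, it is offloaded to the cloud (extra RTT).
    """
    edge = [(0.0, i) for i in range(len(edge_speeds))]   # (next-free-time, id)
    heapq.heapify(edge)
    plan = []
    for name, cycles, deadline in sorted(tasks, key=lambda t: t[2]):  # EDF order
        free_at, sid = heapq.heappop(edge)
        edge_finish = free_at + cycles / edge_speeds[sid]
        if edge_finish <= deadline:
            heapq.heappush(edge, (edge_finish, sid))
            plan.append((name, f"edge-{sid}", edge_finish))
        else:
            heapq.heappush(edge, (free_at, sid))          # leave server untouched
            plan.append((name, "cloud", cloud_rtt + cycles / cloud_speed))
    return plan

tasks = [("t1", 4e9, 0.8), ("t2", 2e9, 0.5), ("t3", 6e9, 1.5)]
print(schedule(tasks, edge_speeds=[5e9, 8e9], cloud_speed=30e9, cloud_rtt=0.2))
```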

5.
王妍  马秀荣  单云龙 《计算机应用》2019,39(5):1429-1433
To meet the transmission performance requirements of real-time (RT) and non-real-time (NRT) services for multiple users on the downlink of Long Term Evolution (LTE) systems, an improved maximum weighted delay first (MLWDF) resource scheduling algorithm based on users' weighted average delay is proposed. On top of channel awareness and user quality-of-service (QoS) awareness, the algorithm introduces a weighted average delay factor that reflects the user's buffer state; this factor is obtained by balancing the average delay of data still waiting in the user's buffer against that of data already transmitted, so that real-time services with larger delays and traffic volumes are scheduled first, improving the user experience. Theoretical analysis and link-level simulation show that, while guaranteeing per-service delay and fairness, the proposed algorithm improves the QoS of real-time services: with 50 users, the packet loss rate of real-time services is 53.2% lower and the average user throughput 44.7% higher than with MLWDF; non-real-time throughput is sacrificed, but remains better than with VT-MLWDF. The experimental results show that under multi-user, multi-service transmission the proposed algorithm improves real-time transmission performance and clearly outperforms the baseline algorithms in QoS.
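A sketch of the scheduling metric: the classic M-LWDF term (channel-aware and QoS-aware) multiplied by a weighted average delay factor that compares data still buffered with data already sent. The exact form of that factor is not given in the abstract, so the version below is an assumption:

```python
import math

def mlwdf_metric(hol_delay, delay_budget, drop_prob, inst_rate, avg_rate,
                 avg_delay_buffered=None, avg_delay_sent=None):
    """Classic M-LWDF metric, optionally scaled by a weighted average delay
    factor in the spirit of the modification described above (the exact
    definition of that factor is an assumption here)."""
    a = -math.log(drop_prob) / delay_budget          # QoS weight
    metric = a * hol_delay * inst_rate / max(avg_rate, 1e-9)
    if avg_delay_buffered is not None and avg_delay_sent is not None:
        # Weighted average delay factor: users whose buffered data has waited
        # longer than their already-transmitted data get boosted.
        metric *= avg_delay_buffered / max(avg_delay_sent, 1e-9)
    return metric

# Real-time user close to its delay budget vs. a non-real-time user.
print(mlwdf_metric(80e-3, 100e-3, 0.05, 2.0e6, 1.0e6, 60e-3, 30e-3))
print(mlwdf_metric(20e-3, 300e-3, 0.10, 3.0e6, 2.5e6))
```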

6.
陈琳  张富强 《软件学报》2016,27(S2):254-260
As data center networks grow rapidly, congestion caused by poor bandwidth utilization has become increasingly prominent, and improving link bandwidth utilization and throughput through load balancing has become a research hotspot. Scheduling traffic sensibly by combining traffic characteristics, link state, and application requirements is the key to balancing load across network links. Targeting the scheduling of bursty, bandwidth-hungry elephant flows in data centers, a maximum-probability-path traffic scheduling algorithm for SDN data center networks is proposed. The algorithm first computes all paths that satisfy the bandwidth demand of the flow to be scheduled, then calculates the bandwidth ratio between the flow's demand and each path's minimum link bandwidth, derives a probability for every path from the ratios of all candidate paths, and finally selects a path through a probabilistic mechanism. The algorithm considers not only the flow's bandwidth demand and current link bandwidth usage, but also, globally, flow scheduling and link bandwidth fragmentation. Experimental results show that the maximum-probability-path scheduling algorithm effectively relieves network congestion, improves bandwidth utilization and throughput, and reduces network delay, thereby improving the overall network performance and service quality of the data center.
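A minimal sketch of the path selection step, assuming the path probability is proportional to the remaining headroom 1 − (flow bandwidth / bottleneck bandwidth); the paper's exact ratio-to-probability mapping is not given in the abstract, and all link capacities below are made up:

```python
import random

def pick_path(flow_bw, paths, rng=random.Random(0)):
    """Probabilistic path selection for an elephant flow (illustrative).

    paths: {name: [free bandwidth of each link on the path]}.
    Only paths whose bottleneck can carry the flow are kept; each keeps the
    ratio flow_bw / bottleneck_bw, and the probability is proportional to the
    remaining headroom 1 - ratio (an assumed mapping).
    """
    candidates = {}
    for name, links in paths.items():
        bottleneck = min(links)
        if bottleneck >= flow_bw:
            candidates[name] = 1.0 - flow_bw / bottleneck
    if not candidates:
        return None                      # no path satisfies the demand
    total = sum(candidates.values())
    if total == 0:                       # all exact fits: choose uniformly
        return rng.choice(list(candidates))
    r, acc = rng.random() * total, 0.0
    for name, w in candidates.items():
        acc += w
        if r <= acc:
            return name

paths = {"p1": [400, 900, 600], "p2": [300, 300, 800], "p3": [150, 700, 500]}
print(pick_path(flow_bw=200, paths=paths))
```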

7.
In future computer system design, I/O systems will have to support continuous media such as video and audio, whose system demands are different from those of data such as text. Multimedia computing requires us to focus on designing I/O systems that can handle real-time demands. Video- and audio-stream playback and teleconferencing are real-time applications with different I/O demands. We primarily consider playback applications, which require guaranteed real-time I/O throughput. In a multimedia server, the service phases of a real-time request are disk, small computer systems interface (SCSI) bus, and processor scheduling. Additional service might be needed if the request must be satisfied across a local area network. We restrict ourselves to the support provided at the server, with special emphasis on two service phases: disk scheduling and SCSI bus contention. When requests have to be satisfied within deadlines, traditional real-time systems use scheduling algorithms such as earliest deadline first (EDF) and least slack time first. However, EDF assumes that disks are preemptable, and the seek-time overheads of its strict real-time scheduling result in poor disk utilization. We can provide the constant data rate necessary for real-time requests in various ways that involve trade-offs. We analyze how trade-offs that involve buffer space affect the performance of scheduling policies. We also show that deferred deadlines, which increase buffer requirements, improve system performance significantly.
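The tension described above between EDF's strict deadline ordering and seek overhead is often resolved by batching (deferring) deadlines and sweeping the disk within a batch; the sketch below shows such a SCAN-EDF-style ordering with made-up requests. It is offered only to make the trade-off concrete, not as this article's exact policy; the deferred, batched deadlines are what enable the per-round sweep and are also why extra buffer space is needed:

```python
def scan_edf_order(requests):
    """SCAN-EDF-style ordering: serve requests in deadline order, but among
    requests sharing the same (batched/deferred) deadline round, sweep the
    disk in track order to cut seek overhead.

    requests: list of (name, deadline_round, track); data are illustrative.
    """
    return sorted(requests, key=lambda r: (r[1], r[2]))

requests = [("a", 2, 810), ("b", 1, 400), ("c", 1, 90),
            ("d", 2, 120), ("e", 1, 700)]
# Round 1 is swept in track order (c, b, e), then round 2 (d, a):
print(scan_edf_order(requests))
```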

8.
A practical join processing strategy that allows effective utilization of arbitrary degrees of parallelism in both the I/O and join processing subsystems is presented. Analytic bounds on the minimum execution time, minimum number of processors, and processor utilization are presented, along with bounds on the execution time given a fixed number of processors. These bounds assume that sufficient buffers are available. An analytic lower bound on buffer requirements, as well as a practical heuristic for use in limited-buffer environments, is also presented. A sampling of corroborative simulation results is included.

9.
This study investigates the buffer allocation strategy of a flow-shop-type production system that possesses a given total amount of buffers, finite buffer capacity at each workstation, and general interarrival and service times, in order to optimize such system performance measures as minimizing work-in-process, cycle time and blocking probability, maximizing throughput, or combinations of these. In theory, the buffer allocation problem is in itself a difficult NP-hard combinatorial optimization problem; it is made even more difficult by the fact that the objective function is not obtainable in closed form for interrelating the integer decision variables (i.e., buffer sizes) and the performance measures of the system. Therefore, the purpose of this paper is to present an effective design methodology for buffer allocation in the production system. Our design methodology uses a dynamic programming process along with an embedded approximate analytic procedure for computing system performance measures under a given allocation strategy. Numerical experiments show that the design methodology can quickly and quite precisely find the optimal or sub-optimal allocation strategy for most production system patterns. Scope and purpose: Buffer allocation is an important yet intriguingly difficult issue in physical layout and location planning for production systems with finite floor space. Adequate allocation and placement of the available buffers among workstations can help to reduce work-in-process, alleviate congestion and blocking in the production system, and smooth the product manufacturing flow. In view of the problem's complexity, we focus on flow-shop-type production systems with general arrival and service patterns and finite buffer capacity. Flow-shop-type lines, which usually involve a product-based layout, play an important role in mass-production manufacturing process organizations such as transfer lines and batch flow lines. The purpose of this paper is to present a design methodology, combining a heuristic search with an embedded analytic algorithm for system performance, for obtaining the optimal or sub-optimal buffer allocation strategy. Successful use of this design methodology would improve the production efficiency and effectiveness of flow-shop-type production systems.
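A compact sketch of the dynamic-programming skeleton: distribute a total buffer budget over the workstations, with an embedded per-station evaluator standing in for the paper's approximate analytic procedure. Here the evaluator is a simple M/M/1/K blocking formula, whereas the paper handles general arrival and service times; the rates and the budget are assumptions:

```python
from functools import lru_cache

MU = [1.0, 0.8, 1.2, 0.9]        # service rates of the workstations (assumed)
ARRIVAL = 0.7                    # external arrival rate (assumed)

def station_throughput(lam, mu, k_buf):
    """Approximate throughput of one M/M/1/K-like station (arrivals not
    blocked). Stand-in for the paper's embedded analytic procedure."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-9:
        blocking = 1.0 / (k_buf + 1)
    else:
        blocking = (1 - rho) * rho ** k_buf / (1 - rho ** (k_buf + 1))
    return lam * (1 - blocking)

@lru_cache(maxsize=None)
def best_throughput(station, buffers_left, lam):
    """DP over stations: distribute the remaining buffers to maximize the
    throughput leaving the last station (decomposition assumption)."""
    if station == len(MU):
        return lam, ()
    best = (0.0, ())
    stations_after = len(MU) - station - 1
    for b in range(1, buffers_left - stations_after + 1):
        out = station_throughput(lam, MU[station], b)
        thr, rest = best_throughput(station + 1, buffers_left - b, round(out, 6))
        if thr > best[0]:
            best = (thr, (b,) + rest)
    return best

thr, alloc = best_throughput(0, 10, ARRIVAL)
print(f"approx. throughput {thr:.4f} with buffer allocation {alloc}")
```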

10.
In this study, we present a new analytic model for evaluating the average end-to-end delay in IEEE 802.11 multihop wireless networks. Our model gives closed-form expressions for the end-to-end delay as a function of the arrival and service time patterns. Each node is modelled as a G/G/1/K queue, from which expressions for the service time can be derived via queueing theory. By combining this delay evaluation with different admission controls, we design a protocol called DEAN (Delay Estimation in Ad hoc Networks). DEAN is able to provide delay guarantees to quality-of-service applications as a function of application-level requirements. Through extensive simulations, we compare the performance of DEAN with that of other approaches.
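The abstract does not reproduce the closed-form expressions, but the flavour of a per-node G/G/1 delay estimate summed over a multihop path can be sketched with Kingman's approximation; it ignores the finite buffer K, so it is only a rough stand-in, and all numbers are assumptions:

```python
def kingman_wait(lam, mu, ca2, cs2):
    """Kingman (heavy-traffic) approximation of the mean waiting time in a
    G/G/1 queue: W ~= rho/(1-rho) * (ca^2 + cs^2)/2 * 1/mu."""
    rho = lam / mu
    assert rho < 1, "queue must be stable for this approximation"
    return (rho / (1 - rho)) * ((ca2 + cs2) / 2) * (1 / mu)

def end_to_end_delay(path):
    """Sum per-hop waiting + service times along a multihop path.
    path: list of (lam, mu, ca2, cs2) per node, all values assumed."""
    return sum(kingman_wait(lam, mu, ca2, cs2) + 1 / mu
               for lam, mu, ca2, cs2 in path)

# Three-hop path with increasing load towards the destination (assumed numbers).
path = [(40, 100, 1.2, 1.5), (60, 100, 1.2, 1.5), (80, 100, 1.2, 1.5)]
print(f"estimated end-to-end delay: {end_to_end_delay(path) * 1000:.2f} ms")
```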

11.
The increasing demand for real-time applications in Wireless Sensor Networks (WSNs) has made Quality of Service (QoS) based communication protocols a hot research topic. Satisfying QoS requirements (e.g. bandwidth and delay constraints) for the different QoS-based applications of WSNs raises significant challenges; more precisely, the networking protocols need to cope with energy constraints while providing precise QoS guarantees. Therefore, enabling QoS applications in sensor networks requires energy and QoS awareness in different layers of the protocol stack. In many of these applications (such as multimedia applications, or real-time and mission-critical applications), the network traffic is a mix of delay-sensitive and delay-tolerant traffic, so QoS routing becomes an important issue. In this paper, we propose an Energy-Efficient and QoS-aware multipath routing protocol (abbreviated EQSR) that maximizes the network lifetime by balancing energy consumption across multiple nodes, uses the concept of service differentiation to allow delay-sensitive traffic to reach the sink node within an acceptable delay, reduces the end-to-end delay by spreading the traffic across multiple paths, and increases the throughput by introducing data redundancy. EQSR uses the residual energy, node available buffer size, and Signal-to-Noise Ratio (SNR) to predict the best next hop during the path construction phase. Based on the concept of service differentiation, EQSR employs a queuing model to handle both real-time and non-real-time traffic.
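A minimal sketch of the next-hop prediction described above, scoring each neighbour by a weighted, normalized combination of residual energy, free buffer space, and link SNR; the weights and normalization bounds are assumptions, not EQSR's published parameters:

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    residual_energy: float   # joules remaining
    free_buffer: int         # free queue slots
    snr_db: float            # link SNR towards this neighbour

def link_score(n: Neighbor, w_energy=0.4, w_buffer=0.3, w_snr=0.3,
               e_max=2.0, buf_max=50, snr_max=30.0):
    """Weighted, normalized score used to pick the next hop; the weights and
    normalization bounds are illustrative, not EQSR's actual values."""
    return (w_energy * n.residual_energy / e_max
            + w_buffer * n.free_buffer / buf_max
            + w_snr * n.snr_db / snr_max)

def best_next_hop(neighbors):
    return max(neighbors, key=link_score)

neighbors = [Neighbor(7, 1.6, 35, 18.0), Neighbor(9, 0.9, 48, 24.0),
             Neighbor(12, 1.9, 10, 12.0)]
print(best_next_hop(neighbors).node_id)
```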

12.
The Computer Networks Laboratory at the University of Virginia has developed a real-time messaging service that runs on IBM PCs and PC/ATs when interconnected with a Proteon ProNET-10 token ring local area network. The system is a prototype for a real-time communications network to be used aboard ships. The system conforms to the IEEE 802.2 logical link control standard for type I (connectionless, or datagram) service, with an option for acknowledged datagrams. The application environment required substantial network throughput and bounded message delay. Thus, the development philosophy was to emphasize performance initially and to offer only primitive user services. After providing and measuring the performance of a basic datagram service, the intent is to add additional user services one at a time and to retain only those which the user can ‘afford’ in terms of their impact on throughput, delay, and CPU utilization. The current system is programmed in C. The user interface is a set of C procedure calls that initialize tables, reserve buffer space, send and receive messages, and report network status. The system is now operational, and initial performance measurements are complete. Using this system, an individual PC can transmit or receive approximately 200 short (about 100 bytes) messages per second, and the PC/AT operates at nearly 500 short messages per second.

13.
Design of servers to meet the quality of service (QoS) requirements of interactive video-on-demand (VOD) systems is challenging. Recognizing the increasing use of these systems in a wide range of applications, as well as the stringent service demands expected from them, several design alternatives have been proposed to improve server throughput. A buffer management technique called interval caching is one such solution: it exploits the temporal locality of requests to the same movie and tries to serve requests from the cache, thereby enhancing system throughput. In this paper, we present a comprehensive mathematical model for analyzing the performance of interactive video servers that use interval caching. The model takes into account the representative workload parameters of interactive servers employing interval caching and calculates the expected number of cached streams as an indication of the improvement in server capacity due to interval caching. In particular, user interactions, which strongly affect the performance of interval caching, are reflected realistically in our model for an accurate analysis. A statistical admission control technique has also been developed based on this model. Using the model as a design tool, we apply it to measure the impact of different VCR operations on client requests and rejection probability, as well as the effect of cache size.
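A minimal sketch of the interval caching idea the model analyzes: pair up temporally adjacent requests for the same movie, and cache the shortest intervals first so a fixed buffer serves as many following streams as possible. The request times and the one-block-per-time-unit interval sizing are assumptions:

```python
def interval_caching_plan(requests, cache_blocks):
    """Pick which consecutive-viewer intervals to cache.

    requests: list of (movie, start_time); an interval is formed by each pair
    of consecutive viewers of the same movie, and its size is taken as one
    block per time unit of gap (an assumption). Smallest intervals are cached
    first, so the fixed cache serves the most follower streams.
    """
    intervals = []
    last_start = {}
    for movie, start in sorted(requests, key=lambda r: r[1]):
        if movie in last_start:
            gap = start - last_start[movie]
            intervals.append((gap, movie, last_start[movie], start))
        last_start[movie] = start
    cached, used = [], 0
    for gap, movie, lead, follower in sorted(intervals):
        if used + gap <= cache_blocks:
            cached.append((movie, lead, follower))
            used += gap
    return cached

reqs = [("A", 0), ("A", 5), ("B", 2), ("A", 40), ("B", 12), ("C", 3)]
print(interval_caching_plan(reqs, cache_blocks=15))
```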

14.
Critical services in a telecommunication network should be provided continuously even when undesirable events such as sabotage, natural disasters, or network failures happen. It is essential to provide virtual connections between peering nodes with certain performance guarantees such as minimum throughput, maximum delay or loss. The design, construction and management of virtual connections, network infrastructures and service platforms aim at meeting such requirements. In this paper we consider the network’s ability to survive major and minor failures in the network infrastructure and service platforms caused by undesired events, which may be external or internal; to survive means that the services provided still comply with the requirements in the presence of failures. Network survivability is quantified as defined by the ANSI T1A1.2 committee: the transient performance from the instant an undesirable event occurs until a steady state with an acceptable performance level is attained. This paper addresses the assessment of the survivability of a network with virtual connections exposed to link or node failures. We have developed both simulation and analytic models to cross-validate our assumptions. In order to avoid state-space explosion while addressing large networks, we decompose our models first in space, by studying the nodes independently, and then in time, by decoupling our analytic performance and recovery models, which gives us a closed-form solution. The modelling approaches are applied to both small and real-sized network examples. Three different scenarios have been defined, including a single link failure, a hurricane disaster, and instabilities in a large block of the system (a transient common failure). The results show very good correspondence between the transient loss and delay performance in our simulations and in the analytic approximations.

15.
For real-time applications running over heterogeneous networks that contain both wired and wireless links, a wireless MAC (media access control) layer scheduling algorithm that satisfies their end-to-end quality-of-service (QoS) requirements is proposed (real-time cross-layer scheduling algorithm for real-time applications, RTCLA). Following a cross-layer design, the algorithm combines adaptive modulation and coding (AMC) with selective-repeat automatic repeat request (SR-ARQ) to improve system throughput and spectrum utilization while meeting the application's packet error rate (PER) requirement and keeping the number of packets waiting to time out at the base station as small as possible. Simulations evaluate the algorithm on packet timeout ratio, average effective system throughput, and fairness, comparing it with three widely used algorithms: modified proportional fair (MPF), earliest deadline first (EDF), and modified largest weighted delay first (M-LWDF). The results show that, given the strict delay requirements of real-time applications, the scarcity of wireless resources, and the time-varying channel, RTCLA is better suited to delay-sensitive real-time applications, standing out particularly in packet timeout ratio; they also show that RTCLA's stability is essentially the same as that of the other three algorithms.
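The AMC half of the cross-layer design can be sketched as choosing, for the current SNR, the highest-rate modulation and coding scheme whose residual packet error rate after SR-ARQ retransmissions still meets the application's PER target. The MCS table and exponential PER model below are illustrative, not RTCLA's actual link adaptation:

```python
import math

# Illustrative MCS table: (name, spectral efficiency bits/s/Hz, (a, g)) where
# PER ~= a * exp(-g * SNR_linear). Values are assumptions, not the paper's.
MCS_TABLE = [("BPSK 1/2",  0.5, (274.7, 7.99)),
             ("QPSK 1/2",  1.0, (90.25, 3.50)),
             ("QPSK 3/4",  1.5, (67.61, 1.69)),
             ("16QAM 3/4", 3.0, (53.39, 0.38)),
             ("64QAM 3/4", 4.5, (35.35, 0.09))]

def per(snr_db, a, g):
    return min(1.0, a * math.exp(-g * 10 ** (snr_db / 10)))

def select_mcs(snr_db, target_per, arq_retx=0):
    """Pick the highest-throughput MCS whose residual PER after `arq_retx`
    SR-ARQ retransmissions stays below the application's target."""
    best = None
    for name, eff, (a, g) in MCS_TABLE:
        residual = per(snr_db, a, g) ** (arq_retx + 1)
        if residual <= target_per and (best is None or eff > best[1]):
            best = (name, eff, residual)
    return best

print(select_mcs(snr_db=8.0, target_per=1e-2, arq_retx=1))
```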

16.
In order to produce service compositions, modern web applications combine both in-house and third-party web services, so their performance depends on the performance of the services they integrate. At early stages, it may be hard to quantify the performance demanded from the services to meet the requirements of the application, as some services may not yet be available or may not provide performance guarantees. We present several algorithms that compute the required performance of each service from a model of a service composition at an early stage of development. This is also helpful when testing service compositions and selecting candidate web services, enabling performance-driven recommendation systems for web services that could be integrated into service discovery. Domain experts can annotate the model to include partial knowledge of the expected performance of the services. We develop a throughput computation algorithm and two time limit computation algorithms operating on such a model: a baseline algorithm based on linear programming, and an optimised graph-based algorithm. We conduct theoretical and empirical evaluations of their performance and capabilities on a large sample of models of several classes. The results show that the algorithms can provide an estimation of the performance required from each service, and that the throughput computation algorithm and the graph-based time limit computation algorithm perform well even on models with many paths.
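A small illustration of what such early-stage computations look like on an annotated model: required per-service throughput obtained by scaling the composition's rate by expected invocation counts, and a global time limit split across a sequential path using partial expert annotations. The model shape, names, and numbers are assumptions, and the actual LP-based and graph-based algorithms are far more general:

```python
def required_throughput(workflow, top_level_rate):
    """Required request rate per service: the composition's rate times the
    expected invocations per composition request (branch probabilities and
    average loop counts folded in by a domain expert)."""
    return {svc: top_level_rate * n for svc, n in workflow}

def split_time_limit(global_limit, path, known):
    """Spread a global response-time limit over a sequential path: services
    with expert-annotated times keep them, the remaining budget is split
    evenly among the rest (a simple stand-in for the real algorithms)."""
    remaining = global_limit - sum(known.get(s, 0.0) for s in path)
    unknown = [s for s in path if s not in known]
    budget = dict(known)
    for s in unknown:
        budget[s] = remaining / len(unknown)
    return budget

# The composition must sustain 20 requests/s; "payment" sits behind a branch
# taken 60% of the time, "audit" runs twice per request (assumed figures).
workflow = [("catalog", 1.0), ("payment", 0.6), ("audit", 2.0)]
print(required_throughput(workflow, top_level_rate=20.0))
print(split_time_limit(2.0, ["catalog", "payment", "audit"], {"payment": 0.4}))
```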

17.
In the past, much emphasis has been given to the data throughput of VOD servers. In Interactive Video-on-Demand (IVOD) applications, such as digital libraries, service availability and response times are more visible to the user than the underlying data throughput. Data throughput is a measure of how efficiently resources are utilized, and higher throughput may be achieved at the expense of deteriorated user-perceived performance metrics such as the probability of admission and the queueing delay prior to admission. In this paper, we propose and evaluate a number of strategies for sequencing the admission of pending video requests. Under different request arrival rates and buffer capacities, we measure the probability of admission, queueing delay and data throughput of each strategy. The results of our experiments show that simple hybrid strategies can improve the number of admitted requests and reduce the queueing time without jeopardizing data throughput. The techniques we propose are independent of the underlying disk scheduling techniques, so they can be employed to improve the user-perceived performance of VOD servers in general.
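A toy sketch of a hybrid admission-sequencing rule in that spirit: admit the smallest pending request that fits the free buffer, but fall back to the oldest request once it has waited too long, bounding queueing delay without hurting throughput much. The rule and thresholds are illustrative, not the strategies evaluated in the paper:

```python
from collections import deque

def admit_next(pending, free_buffer, now, max_wait=30.0):
    """Hybrid admission sequencing (illustrative).

    pending: deque of (request_id, arrival_time, buffer_needed) in FCFS order.
    Returns the request to admit next, or None if nothing fits.
    """
    if not pending:
        return None
    oldest = pending[0]
    if now - oldest[1] >= max_wait and oldest[2] <= free_buffer:
        pending.remove(oldest)           # FCFS fallback bounds queueing delay
        return oldest
    # Otherwise pick the smallest request that fits the free buffer.
    fitting = [r for r in pending if r[2] <= free_buffer]
    if not fitting:
        return None
    choice = min(fitting, key=lambda r: r[2])
    pending.remove(choice)
    return choice

q = deque([("r1", 0.0, 80), ("r2", 5.0, 20), ("r3", 6.0, 40)])
print(admit_next(q, free_buffer=50, now=10.0))   # r2: smallest fitting request
print(admit_next(q, free_buffer=90, now=40.0))   # r1: waited past max_wait
```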

18.
The staged service model is an event-driven service architecture that supports high concurrency and high throughput. To better match the fact that most Internet applications now provide differentiated classes of service, a method is proposed for adding support for prioritized requests to this model: dynamic priorities are defined, the random early detection (RED) algorithm is adapted to control the enqueueing of events of different priorities, and dynamic priority promotion prevents low-priority events from being starved. Experimental results show that, while retaining good system performance, the improved model satisfies the real-time and throughput requirements of requests at different priority levels.
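A minimal sketch of the two mechanisms: a priority-aware variant of random early detection that admits events to a stage queue, and an aging rule that promotes long-waiting low-priority events so they are not starved. Thresholds, drop probabilities, and the age limit are assumptions:

```python
import random, time

RED_PARAMS = {  # (min_threshold, max_threshold, max_drop_prob) per priority
    "high": (80, 100, 0.05),
    "low":  (40, 100, 0.50),
}

def red_admit(queue_len, priority, rng=random.Random(0)):
    """Priority-aware RED: low-priority events start being dropped earlier and
    with higher probability (instantaneous queue length is used here instead
    of the usual exponentially weighted average, for brevity)."""
    mn, mx, p_max = RED_PARAMS[priority]
    if queue_len < mn:
        return True
    if queue_len >= mx:
        return False
    drop_p = p_max * (queue_len - mn) / (mx - mn)
    return rng.random() >= drop_p

def promote_aged(events, now, age_limit=2.0):
    """Dynamic priority boost: events waiting longer than `age_limit` seconds
    are promoted so low-priority requests are never starved."""
    for ev in events:
        if ev["priority"] == "low" and now - ev["enqueued_at"] > age_limit:
            ev["priority"] = "high"
    return events

events = [{"id": 1, "priority": "low", "enqueued_at": time.time() - 5},
          {"id": 2, "priority": "low", "enqueued_at": time.time()}]
print(red_admit(60, "low"), red_admit(60, "high"))
print([e["priority"] for e in promote_aged(events, time.time())])
```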

19.
Streaming media technology is increasingly widely used, but delay and jitter during data transmission degrade playback quality, and providing streaming services with guaranteed performance has become the key to its wider adoption. A minimum-delay algorithm is proposed that provides high channel utilization and high data throughput at the destination buffer, improving playback quality; a minimum clustering delay algorithm is further proposed as an improvement to optimize overall playback performance. Both algorithms have practical value for the deployment of streaming media technology.

20.
In this paper, we model, analyze and evaluate the performance of a 2-class priority architecture for finite-buffered multistage interconnection networks (MINs). The MIN operation is modelled with a state diagram that includes the possible MIN states, the transitions, and the conditions under which each transition occurs. Equations expressing state and transition probabilities are subsequently given, providing a formal model for evaluating the MIN's performance. The proposed architecture's performance is then analyzed using simulations; operational parameters, including buffer length, MIN size, offered load and the ratio of high-priority packets, are varied across experiments to gain insight into how each parameter affects overall MIN performance. The 2-class priority MIN's performance is compared against that of single-priority MINs, detailing the performance gains and losses for packets of different priorities. Performance is assessed by means of the two most commonly used factors, namely packet throughput and packet delay, while a performance indicator combining both individual factors is introduced, computed and discussed. The findings of this study can be used by network and interconnection system designers to deliver efficient systems while minimizing overall cost. The performance evaluation model can also be applied to other network types, providing the necessary data for network designers to select optimal values for network operation parameters.
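A toy slot-based simulation of a single finite buffer with two priority classes, reporting per-class throughput and delay; it is meant only to make the 2-class trade-off concrete and is not the paper's state-diagram model (buffer size, load, and priority ratio are assumptions):

```python
import random
from collections import deque

def simulate(buffer_size=8, load=0.9, high_ratio=0.3, slots=200_000, seed=1):
    """Toy slot-based model of one switching element's finite buffer with a
    2-class priority discipline: high-priority packets are transmitted first;
    arrivals finding a full buffer are dropped."""
    rng = random.Random(seed)
    high, low = deque(), deque()
    stats = {"high": [0, 0.0], "low": [0, 0.0]}   # [delivered, summed delay]
    for t in range(slots):
        # one departure per slot, high priority first
        if high:
            stats["high"][0] += 1
            stats["high"][1] += t - high.popleft()
        elif low:
            stats["low"][0] += 1
            stats["low"][1] += t - low.popleft()
        # one arrival attempt per slot
        if rng.random() < load and len(high) + len(low) < buffer_size:
            (high if rng.random() < high_ratio else low).append(t)
    for cls, (n, d) in stats.items():
        print(f"{cls}: throughput {n/slots:.3f}, mean delay {d/max(n, 1):.2f} slots")

simulate()
```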
