Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
潘锋 (Pan Feng), 高航 (Gao Hang). 《信息技术》 (Information Technology), 2004, 28(9): 51-53
The bottleneck of network video telephony lies in the bandwidth limitation of the underlying network. To address this problem, network video telephony commonly adopts a buffer prefetching strategy. This paper proposes a method for calculating the number of buffers; using the derived corollary, the number of prefetch buffers that should be allocated under normal or congested network conditions can be computed, thereby improving the playback quality of network video telephony.
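As a rough illustration of this kind of buffer-count calculation (the sizing rule and parameter names below are generic assumptions, not the paper's derivation), a minimal Python sketch:

```python
import math

def prefetch_buffer_count(bitrate_bps, buffer_ms, frame_bytes, congested=False, slowdown=2.0):
    """Estimate how many prefetch buffers of `frame_bytes` each are needed to
    cover `buffer_ms` of playback at the given stream bitrate. Under congestion,
    assume the interval to be covered grows by a factor of `slowdown`."""
    playout_bytes = bitrate_bps / 8 * (buffer_ms / 1000.0)
    if congested:
        playout_bytes *= slowdown          # cover a longer stall when the network is blocked
    return math.ceil(playout_bytes / frame_bytes)

# e.g. a 384 kbit/s video call, 200 ms of protection, 1 KB buffers
print(prefetch_buffer_count(384_000, 200, 1024))          # normal network -> 10
print(prefetch_buffer_count(384_000, 200, 1024, True))    # congested network -> 19
```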

2.
Shared buffer switches consist of a memory pool completely shared among the output ports of a switch. Shared buffer switches achieve low packet loss performance as buffer space is allocated in a flexible manner. However, this type of buffered switch suffers from high packet losses when the input traffic is imbalanced and bursty. Heavily loaded output ports dominate the usage of shared memory and lightly loaded ports cannot have access to these buffers. To regulate the lengths of very active queues and avoid performance degradation, a threshold‐based dynamic buffer management policy, the decay function threshold, is proposed in this paper. The decay function threshold is a per‐queue scheme that uses a tailored threshold for each output port queue. Under this scheme, the buffer space granted to an output port decays as the queue size of this port increases and/or the empty buffer space decreases. Results have shown that the decay function threshold policy is as good as the well‐known dynamic thresholds scheme, and is more robust when multicast traffic is used. The main advantage of this policy is that, besides best‐effort traffic, it supports quality of service (QoS) traffic by using an integrated buffer management and scheduling framework. Copyright © 2006 John Wiley & Sons, Ltd.
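The abstract does not reproduce the decay function itself; the sketch below only illustrates the general idea of a per-queue threshold that shrinks as the queue grows and the shared pool fills, shown next to the classic dynamic-thresholds rule for comparison (the functional form and parameters are assumptions):

```python
import math

def dynamic_threshold(free_space, alpha=1.0):
    """Classic dynamic-thresholds rule: a queue may grow up to alpha times the
    currently unused shared memory."""
    return alpha * free_space

def decay_threshold(queue_len, free_space, total_mem, alpha=1.0, beta=2.0):
    """Hypothetical per-queue decaying threshold: the allowance shrinks as this
    queue gets longer and as the shared pool fills up (illustrative form only)."""
    occupancy = 1.0 - free_space / total_mem
    return alpha * free_space * math.exp(-beta * occupancy * queue_len / total_mem)

def admit(queue_len, free_space, total_mem):
    """Accept an arriving cell only if its output queue is below the threshold."""
    return free_space > 0 and queue_len < decay_threshold(queue_len, free_space, total_mem)

# a 1000-cell pool that is half full: one very long queue vs. a short one
print(admit(queue_len=450, free_space=500, total_mem=1000))   # False: long queue is cut off
print(admit(queue_len=40,  free_space=500, total_mem=1000))   # True: short queue still admitted
```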

3.
A reconfiguration management scheme changes the logical topology in response to changing traffic patterns in the higher layers of a network or to the congestion level on the logical topology. In this paper, we formulate a reconfiguration scheme with a shared buffer‐constrained cost model based on required quality‐of‐service (QoS) constraints, reconfiguration penalty cost, and buffer gain cost through traffic aggregation. The proposed scheme maximizes the derived expected reward‐cost function while guaranteeing each flow's required QoS. Simulation results show that our reconfiguration scheme significantly outperforms the conventional one even when the available physical resources are limited.

4.
Traditional traffic identification methods based on well‐known port numbers are not appropriate for the identification of new types of Internet applications. This paper proposes a new method to identify current Internet traffic, which is a preliminary but essential step toward traffic characterization. We categorized most current network‐based applications into several classes according to their traffic patterns. Then, using this categorization, we developed a flow grouping method that determines the application name of traffic flows. We have incorporated our method into NG‐MON, a traffic analysis system, to analyze Internet traffic between our enterprise network and the Internet, and characterized all the traffic according to their application types.
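A toy illustration of flow grouping (the grouping key used here, a shared host pair, is an assumption for illustration; the paper's own grouping rules are not given in the abstract):

```python
from collections import defaultdict

# Each flow record: (src_ip, dst_ip, src_port, dst_port, proto, bytes)
flows = [
    ("10.0.0.5", "203.0.113.7", 51001, 80,   "tcp", 12000),
    ("10.0.0.5", "203.0.113.7", 51002, 80,   "tcp", 34000),
    ("10.0.0.5", "203.0.113.9", 51003, 6881, "tcp", 900000),
]

def group_flows(flows):
    """Group flows sharing the same host pair so they can be labelled as one
    application session instead of being identified flow by flow."""
    groups = defaultdict(list)
    for f in flows:
        key = (f[0], f[1])            # host pair as a crude session key
        groups[key].append(f)
    return groups

for key, members in group_flows(flows).items():
    total = sum(f[5] for f in members)
    print(key, len(members), "flows,", total, "bytes")
```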

5.
The issue of router buffer sizing is still open and significant. Previous work either considers open-loop traffic or only analyzes persistent TCP flows. This paper differs in two ways. First, it considers the more realistic case of nonpersistent TCP flows with heavy-tailed size distribution. Second, instead of only looking at link metrics, it focuses on the impact of buffer sizing on TCP performance. Specifically, our goal is to find the buffer size that maximizes the average per-flow TCP throughput. Through a combination of testbed experiments, simulation, and analysis, we reach the following conclusions. The output/input capacity ratio at a network link largely determines the required buffer size. If that ratio is larger than 1, the loss rate drops exponentially with the buffer size and the optimal buffer size is close to 0. Otherwise, if the output/input capacity ratio is lower than 1, the loss rate follows a power-law reduction with the buffer size and significant buffering is needed, especially with TCP flows that are in congestion avoidance. Smaller transfers, which are mostly in slow-start, require significantly smaller buffers. We conclude by revisiting the ongoing debate on “small versus large” buffers from a new perspective.
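A small sketch of the qualitative sizing rule stated above (the constants and scaling are placeholders, not the paper's fitted values):

```python
def suggested_buffer(out_in_ratio, bandwidth_delay_product_pkts):
    """Qualitative rule from the abstract: when the output/input capacity ratio
    exceeds 1, the loss rate falls off exponentially with buffer size, so almost
    no buffering is needed; below 1, the decay is power-law and substantial
    buffering (on the order of the bandwidth-delay product) helps."""
    if out_in_ratio > 1.0:
        return 0                      # near-zero buffer is close to optimal
    # placeholder: scale the buffer with how far the link is oversubscribed
    return int(bandwidth_delay_product_pkts * min(1.0, 1.0 / out_in_ratio - 1.0))

print(suggested_buffer(1.2, 1000))   # fast output link: tiny buffer
print(suggested_buffer(0.5, 1000))   # oversubscribed link: significant buffer
```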

6.
Research Progress of All-Optical Buffers (cited 2 times: 0 self-citations, 2 by others)
This paper reviews the research progress of all-optical buffers, one of the key technologies in the transition from all-optical transport networks to packet-switching-based all-optical switching networks. The principles and performance of various all-optical buffers are introduced, and their development directions are discussed.

7.
In wireless unicast applications based on opportunistic network coding, each node needs to buffer some packets for decoding the coded packets; this buffer is called the overhearing buffer. This paper analyzes the traditional first-in-first-out overhearing buffer management policy for the "X" topology; theoretical results show that when the overhearing buffer is limited, throughput drops rapidly as the buffer shrinks. To address this, a best-effort overhearing buffer management policy is proposed, which increases the probability that buffered packets are used for decoding and thereby improves system throughput. To reduce the probability of buffering useless packets, a history-based overhearing buffer management policy is further proposed, which effectively reduces the impact of interfering flows on system throughput.

8.
The design of a copy network is presented for use in an ATM (asynchronous transfer mode) switch supporting BISDN (broadband integrated services digital network) traffic. Inherent traffic characteristics of BISDN services require ATM switches to handle bursty traffic with multicast connections. In typical ATM switch designs a copy network is used to replicate multicast cells before being forwarded to a point-to-point routeing network. In such designs, a single multicast cell enters the switch and is replicated once for each multicast connection. Each copy is forwarded to the routeing network with a unique destination address and is routed to the appropriate output port. Non-blocking copy networks permit multiple cells to be multicasted at once, up to the number of outputs of the copy network. Another critical feature of ATM switch design is the location of buffers for the temporary storage of transmitted cells. Buffering is required when multiple cells require a common switch resource for transmission. Typically, one cell is granted the resource and is transmitted while the remaining cells are buffered. Current switch designs associate discrete buffers with individual switch resources. Discrete buffering is not efficient for bursty traffic as traffic bursts can overflow individual switch buffers and result in dropped cells, while other buffers are under-used. A new non-blocking copy network is presented in this paper with a shared-memory input buffer. Blocked cells from any switch input are stored in a single shared input buffer. The copy network consists of three banyan networks and shared-memory queues. The design is scalable for large numbers of inputs due to low hardware complexity, O(N log2 N), and distributed operation and control. It is shown in a simulation study that a switch incorporating the shared-memory copy network has increased throughput and lower buffer requirements to maintain low packet loss probability when compared to a switch with a discrete buffer copy network.

9.
Streaming media synchronization imposes definite requirements on end-to-end delay and delay jitter. A jitter buffer at the terminal can remove the effect of delay jitter, but it also increases the end-to-end delay, so the network delay requirement for guaranteeing streaming media synchronization is not obvious. From the perspective of probabilistic synchronization guarantees, this paper determines the range of jitter buffer capacities that guarantee streaming media synchronization and proposes a sufficient condition for network-level synchronization guarantees. Based on measurements of real Internet VoIP (Voice over IP) traffic, an application example of using the sufficient condition to evaluate synchronization guarantees is given, and its correctness is verified.
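The paper's sufficient condition is not spelled out in the abstract; the sketch below only shows the common quantile-based way of trading jitter-buffer size against a target synchronization probability (parameter names are assumptions):

```python
def jitter_buffer_ms(delay_samples_ms, sync_prob=0.99):
    """Pick a playout buffer large enough that, with probability `sync_prob`,
    a packet's delay variation is absorbed by the buffer. Uses the empirical
    delay quantile minus the minimum delay as the required buffering."""
    delays = sorted(delay_samples_ms)
    idx = min(len(delays) - 1, int(sync_prob * len(delays)))
    return delays[idx] - delays[0]

samples = [42, 45, 44, 60, 43, 47, 52, 44, 49, 95, 46, 44]
buf = jitter_buffer_ms(samples, 0.99)
print(f"jitter buffer ≈ {buf} ms; resulting end-to-end delay ≈ {min(samples) + buf} ms")
```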

10.
A New Buffer Insertion Algorithm for Minimizing Clock Delay and Clock Skew (cited 2 times: 0 self-citations, 2 by others)
曾璇 (Zeng Xuan), 周丽丽 (Zhou Lili), 黄晟 (Huang Sheng), 周电 (Zhou Dian), 李威 (Li Wei). 《电子学报》 (Acta Electronica Sinica), 2001, 29(11): 1458-1462
This paper proposes a new buffer insertion algorithm that targets minimum clock delay and clock skew. Based on the Elmore delay model, we show that the delay between adjacent buffers is a convex function of the buffers' positions in the clock tree. Clock delay reaches its minimum when the buffer placement makes all inter-buffer delay functions have the same derivative value; clock skew reaches its minimum when the delay function values of all source-to-sink paths are equal. For a given clock tree, we insert the same number of buffer levels on every path from the source to each sink, minimize clock delay by optimizing the buffer positions, and minimize clock skew by adjusting buffer sizes and adding buffer levels.
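A minimal sketch of the Elmore delay computation that underlies the convexity argument (the RC segmentation and values are illustrative):

```python
def elmore_delay(segments, load_cap):
    """Elmore delay of an RC chain driving `load_cap`.
    `segments` is a list of (r_i, c_i) per-segment resistance/capacitance pairs;
    the delay is sum_i r_i * (capacitance downstream of segment i)."""
    delay = 0.0
    downstream = sum(c for _, c in segments) + load_cap
    for r, c in segments:
        downstream -= c / 2.0          # pi-model: half the segment cap sits upstream of r
        delay += r * downstream
        downstream -= c / 2.0          # the other half is upstream of the next segment
    return delay

# e.g. a wire split into 10 segments of 10 ohm / 20 fF each, driving a 5 fF sink
wire = [(10.0, 0.02e-12)] * 10
print(f"Elmore delay ≈ {elmore_delay(wire, 5e-15) * 1e12:.2f} ps")   # ≈ 10.5 ps
```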

11.
With the exponential growth of Internet traffic, the energy consumption issue of core networks is increasingly becoming critical. Today's core networks are highly underutilized most of the time because of over‐provisioning and redundancy dimensioning, which results in severe energy inefficiency. In previous work, many non‐deterministic polynomial‐time hard mathematical formulation models have been proposed to minimize the energy consumption of core networks. However, effective heuristics are needed to solve these models in medium/large‐size networks. This work studies the energy‐minimized routing and virtual topology design problem of the power‐hungry Internet protocol (IP) layer in core networks, aiming to achieve an energy‐proportional IP layer by exploiting the hourly variation of traffic to reconfigure the virtual topology and reroute traffic. We formulate energy‐minimized routing and virtual topology design as an integer linear programming problem and propose an LR algorithm, a heuristic based on Lagrangian relaxation, to solve this problem in polynomial time. The simulation results indicate that the LR algorithm outperforms the best previous algorithm and can achieve a near energy‐proportional IP layer with significant power savings. Furthermore, a detailed analysis of the simulation results is conducted, which suggests a design principle for network equipment to facilitate power saving. Copyright © 2013 John Wiley & Sons, Ltd.
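As a generic illustration of the Lagrangian-relaxation/subgradient machinery (a toy covering problem, not the paper's routing and virtual-topology model):

```python
def lagrangian_lower_bound(c, a, b, iters=100, step0=1.0):
    """Toy Lagrangian relaxation: min sum(c_i x_i) s.t. sum(a_i x_i) >= b, x_i in {0,1}.
    Relaxing the coupling constraint with multiplier lam >= 0 gives a subproblem
    that separates per variable; the multiplier is tuned by subgradient ascent."""
    lam, best = 0.0, float("-inf")
    for k in range(1, iters + 1):
        # inner subproblem: pick x_i = 1 whenever its reduced cost is negative
        x = [1 if ci - lam * ai < 0 else 0 for ci, ai in zip(c, a)]
        covered = sum(ai * xi for ai, xi in zip(a, x))
        value = sum(ci * xi for ci, xi in zip(c, x)) + lam * (b - covered)
        best = max(best, value)                      # best dual lower bound so far
        g = b - covered                              # subgradient of the dual function
        lam = max(0.0, lam + (step0 / k) * g)        # diminishing-step ascent
    return best

print(lagrangian_lower_bound(c=[4, 3, 5, 6], a=[2, 1, 3, 2], b=5))   # approaches 9
```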

12.
Techniques for Saving Fiber Delay Lines in Optical Buffers (cited 1 time: 0 self-citations, 1 by others)
Fiber delay lines (FDLs) are the basic means of constructing all-optical buffers. However, in the all-optical buffer designs proposed so far, the FDL utilization, i.e., the ratio of buffering capacity to the total FDL length used, is quite low (typically 2/N, where N is the number of input ports). To solve this problem, two new FDL organizations, a linear structure and a tree structure, are proposed to replace functionally equivalent FDL modules in traditional designs. This approach can be applied to many optical buffer designs and raises the FDL utilization to a constant independent of N or to 1/log2 N, yielding very significant savings.
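A quick numeric check of the utilization figures quoted above (the constant utilization of the linear structure is shown as 0.5 purely as a placeholder):

```python
import math

def fdl_utilization(n_ports):
    """Compare FDL utilization (buffer capacity / total FDL length) of the
    traditional design (2/N) with the tree-structured organization (1/log2 N)
    quoted in the abstract."""
    return {"traditional": 2 / n_ports,
            "tree": 1 / math.log2(n_ports),
            "linear (assumed constant)": 0.5}

for n in (8, 32, 128):
    print(n, {k: round(v, 3) for k, v in fdl_utilization(n).items()})
```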

13.
The infrared focal plane array readout integrated circuit (IRFPA ROIC) is mainly used for communication between the focal plane array and subsequent signal processing. This paper presents a buffer module for an IRFPA ROIC, including column buffers, a high-performance output buffer, and the corresponding bias circuits. All buffers adopt a unity-gain amplifier structure; through optimized amplifier design they can effectively drive different loads with low static power consumption. The buffer module is used in a 640×512, 30 μm pitch mid-wave infrared focal plane readout circuit fabricated in the CSMC 0.5 μm DPTM process. Simulation results show that the column buffer has an open-loop gain of 40.00 dB and a unity-gain bandwidth of 48.17 MHz (10 pF load). The output buffer achieves rail-to-rail input, with an open-loop gain of 39.68 dB, a unity-gain bandwidth of 46.08 MHz, a readout rate of up to 20 MHz, and a power consumption of 16.02 mW (25 pF // 5.1 kΩ load). Test pins brought out from the module inputs help verify chip functionality during wafer testing of the readout circuit. By adjusting the test ports, the measured results agree broadly with the simulations, validating the feasibility of the buffer module design.

14.
A wireless sensor network comprises a very large number of nodes that work collaboratively, gather data, and transmit it to the sink. The "energy hole" or "hotspot" problem is a phenomenon in which nodes near the sink die prematurely, causing network partition. This is caused by imbalanced energy consumption among the nodes and decreases the network's lifetime. Unequal clustering is a technique to cope with this issue. In this paper, a fuzzy‐based unequal clustering algorithm is proposed to prolong the network lifetime. The protocol forms unequal clusters to balance energy consumption. Cluster head selection is done through a fuzzy logic approach. The input variables are the distance to the base station, residual energy, and density. Competition radius and rank are the two output fuzzy variables, and the Mamdani method is employed for fuzzy inference. The protocol is compared with well‐known algorithms such as low‐energy adaptive clustering hierarchy, energy‐aware unequal clustering fuzzy, the multi‐objective fuzzy clustering algorithm, and fuzzy‐based unequal clustering under different network scenarios. In all scenarios the proposed protocol performs better, extending the network lifetime compared with its counterparts.
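A compact Mamdani-style sketch of fuzzy competition-radius assignment (membership functions, rule base, and ranges are assumptions, not the paper's calibrated system, which also uses node density and a second "rank" output):

```python
def tri(x, a, b, c):
    """Triangular membership with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def competition_radius(dist_to_bs, residual_energy, r_min=20.0, r_max=60.0):
    """Two-input Mamdani sketch: near the BS and/or low energy -> small radius;
    far from the BS with high energy -> large radius (illustrative rules only)."""
    near, far = tri(dist_to_bs, -1, 0, 60), tri(dist_to_bs, 40, 100, 101)
    low, high = tri(residual_energy, -0.01, 0, 0.6), tri(residual_energy, 0.4, 1, 1.01)
    rules = [(min(near, low), "small"), (min(near, high), "small"),
             (min(far, low), "medium"), (min(far, high), "large")]
    out = {"small": (r_min - 1, r_min, 35), "medium": (25, 40, 55), "large": (45, r_max, r_max + 1)}
    # Mamdani: clip each rule's output set, aggregate by max, defuzzify by centroid
    xs = [r_min + i * (r_max - r_min) / 200 for i in range(201)]
    agg = [max(min(w, tri(x, *out[name])) for w, name in rules) for x in xs]
    total = sum(agg)
    return sum(x * m for x, m in zip(xs, agg)) / total if total else (r_min + r_max) / 2

print(round(competition_radius(dist_to_bs=80, residual_energy=0.9), 1))  # far, energetic -> larger
print(round(competition_radius(dist_to_bs=15, residual_energy=0.3), 1))  # near, depleted -> smaller
```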

15.
Ethernet is increasingly used to interconnect industrial devices. The objective of this paper is to study the performance of such a network in supporting real‐time communications. To this end, we first propose a general representation that models a switched Ethernet network as a sequence of elementary components such as buffers and multiplexers. Second, we aggregate the individual temporal properties of each component, given in Cruz's survey, to obtain a global formula for calculating the maximum end‐to‐end delay of any industrial communication scenario. Finally, we deduce the limits of the switched Ethernet network with respect to the number of input/output cards connected to the network and the sizes of periodic and aperiodic messages. Copyright © 2005 John Wiley & Sons, Ltd.
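A minimal network-calculus-style sketch of chaining per-component bounds, in the spirit of the Cruz results cited above (token-bucket arrivals through rate-latency components; the component model and numbers are illustrative, not the paper's formula):

```python
def end_to_end_delay_bound(sigma_bits, rho_bps, components):
    """Delay bound for a (sigma, rho) token-bucket flow crossing a chain of
    rate-latency components (R_i bits/s, T_i seconds): the chain behaves like a
    single rate-latency server with rate min(R_i) and latency sum(T_i), so the
    bound is sigma / min(R_i) + sum(T_i), valid when rho <= min(R_i)."""
    r_min = min(r for r, _ in components)
    if rho_bps > r_min:
        raise ValueError("flow rate exceeds the bottleneck service rate; no finite bound")
    return sigma_bits / r_min + sum(t for _, t in components)

# e.g. input card buffer, switch multiplexer, output link, each as (rate, latency)
chain = [(100e6, 20e-6), (100e6, 50e-6), (10e6, 120e-6)]
print(f"{end_to_end_delay_bound(sigma_bits=12000, rho_bps=2e6, components=chain) * 1e3:.3f} ms")
```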

16.
The ever‐increasing transmission requirements of quality of service (QoS)‐sensitive applications, especially real‐time multimedia applications, can hardly be met by single‐path routing protocols. The multipath transmission mechanism is a feasible approach to provide QoS for various applications. On the basis of the general framework of a multipath transport system based on application‐level relay, we present a relay path allocation scheme whose goal is to select suitable relay paths while balancing the overlay traffic among the different domains and relayers. With the application‐layer traffic optimization service under standardization within the Internet Engineering Task Force (IETF), the controller has the topology‐aware ability to allocate relay paths with excellent routing performance. To further develop the universality of our method, the controller perceives the transmission performance of the relay overlay network through the relayers' performance detection processes and thus has the application‐aware ability to allocate relay paths with excellent transmission performance for different applications by consulting application‐specific transmission metrics. Simulation results demonstrate that the proposed relay path allocation algorithm performs well in allocating superior relay paths and can balance the distribution of overlay traffic across domains in different network situations.

17.
Video streaming has emerged as a killer application in today's Internet, delivering a tremendous amount of media content to millions of users at any given time. Such a heavy traffic load demands an effective routing method. In this paper, an effective routing method, named GA‐SDN, is developed based on the software defined network (SDN) technique. To help researchers in this field evaluate video delivery quality over SDN, an evaluation framework and its associated source code are provided. The framework integrates the H.264 Scalable Video coding streaming Evaluation Framework (SVEF) with the Mininet emulator. Through this framework, video processing researchers can evaluate their proposed coding algorithms in an SDN‐enabled network emulator, while network operators or executives can evaluate the impact of real video streams on developing network architectures or protocols. Experimental results demonstrate the usefulness of myEvalSVC_SDN and show that GA‐SDN outperforms the traditional Bellman‐Ford routing algorithm in terms of packet drop rate, throughput, and average peak signal‐to‐noise ratio.

18.
Modern switches and routers require massive storage space to buffer packets. This becomes more significant as link speed increases and switch size grows. From the memory technology perspective, while DRAM is a good choice to meet the capacity requirement, its access time causes problems for high‐speed applications. On the other hand, though SRAM is faster, it is more costly and does not have high storage density. The SRAM/DRAM hybrid architecture provides a good solution to meet both capacity and speed requirements. From the switch design and network traffic perspective, to minimize packet loss, the buffering space allocated for each switch port is normally based on the worst‐case scenario, which is usually huge. However, under normal traffic load conditions, the buffer utilization for such a configuration is very low. Therefore, we propose a reconfigurable buffer‐sharing scheme that can dynamically adjust the buffering space for each port according to the traffic patterns and buffer saturation status. The goal is to achieve high performance and improve buffer utilization without placing much constraint on the buffer speed. In this paper, we study the performance of the proposed buffer‐sharing scheme with both a numerical model and extensive simulations under uniform and non‐uniform traffic conditions. We also present the architecture design and VLSI implementation of the proposed reconfigurable shared buffer using 0.18 µm CMOS technology. Our results show that the proposed architecture consistently achieves high performance and provides much flexibility for high‐speed packet switches to adapt to various traffic patterns. Furthermore, it can be easily integrated into the functionality of port controllers of modern switches and routers. Copyright © 2008 John Wiley & Sons, Ltd.
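The reconfiguration rule is not given in the abstract; the sketch below only illustrates periodically re-partitioning a shared buffer in proportion to measured per-port load with a guaranteed floor per port (the policy and parameters are assumptions):

```python
def repartition(total_cells, arrivals_per_port, saturated_ports, floor_cells=32):
    """Re-split a shared buffer among ports: every port keeps a small guaranteed
    floor, and the remainder is divided in proportion to recent arrivals, with
    saturated ports given a modest bonus weight (hypothetical policy)."""
    n = len(arrivals_per_port)
    weights = [a * (1.5 if p in saturated_ports else 1.0) + 1e-9
               for p, a in enumerate(arrivals_per_port)]
    spare = total_cells - floor_cells * n
    total_w = sum(weights)
    return [floor_cells + int(spare * w / total_w) for w in weights]

# 4-port switch, 4096-cell shared buffer, port 2 heavily loaded and saturated
print(repartition(4096, arrivals_per_port=[120, 90, 900, 60], saturated_ports={2}))
```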

19.
For an industrial control computer network with a dual-bus structure, this paper presents Petri net and high-level Petri net modeling and performance evaluation methods. The evaluation considers factors such as limited packet buffer capacity, limited maximum packet service time, different station priorities, and system bus failure rates, overcoming certain limitations of previous network performance evaluations. The properties of high-level stochastic Petri nets with counting inhibitor arcs (HDSPN) are discussed, simulations are performed on the given DSPN model, and the system parameters that affect network performance are analyzed.

20.
With the emergence of various new Internet-enabled devices, such as tablet PCs and smartphones along with their own applications, traffic is growing faster and faster and demands communication bandwidth at an ever-increasing rate. To accommodate this ever-increasing network traffic, even faster Internet routers are required. To respond to these needs, we propose a new mesh-of-trees based switch architecture, called the MOTS(N) switch, together with two further variations that improve it. MOTS(N) is inspired by the crossbar with crosspoint buffers. It forms a binary tree for each output line, where each gridpoint buffer (because the fabric of the MOTS(N) switch is not a pure crossbar, we call the buffers in the corresponding locations of a pure crossbar "gridpoint buffers") is a leaf node and each internal node is a 2-in 1-out merge buffer (which can accommodate two memory writes and one memory read simultaneously by using its modularized architecture [31]) emulating FIFO queues. Because of this FIFO characteristic of the internal buffers, MOTS(N) ensures QoS like a FIFO output-queued switch. The root node of the tree for each output line is the only component connected to the output port, and each cell is transmitted to the output port without any contention. To limit the number of buffers in the MOTS(N) switch, we also present an improved (practical) variation, the IMOTS(N) switch, in which the sizes of the buffers in the fabric are limited to a certain amount. As a downside of IMOTS(N), however, every cell must pass through log2 N + 1 buffers in the fabric to reach its designated output line. Therefore, for further improvement, IMOTS(N) with cut-through, denoted IMOTSCT(N), is also proposed in this paper. In the IMOTSCT(N) switch, cells can cut through one or more empty buffers on the way from inputs to outputs using simple 1- or 2-bit signal exchanges between buffers. We analyze the throughput of the MOTS(N), IMOTS(N), and IMOTSCT(N) switches and show that they can achieve 100% throughput under Bernoulli independent and identically distributed uniform traffic. Our quantitative simulation results validate the theoretical analysis. Copyright © 2012 John Wiley & Sons, Ltd.
