Similar literature (20 results)
1.
We consider the maximum disjoint paths problem and its generalization, the call control problem, in the on-line setting. In the maximum disjoint paths problem, we are given a sequence of connection requests for some communication network. Each request consists of a pair of nodes that wish to communicate over a path in the network. The request has to be immediately connected or rejected, and the goal is to maximize the number of connected pairs, such that no two paths share an edge. In the call control problem, each request has an additional bandwidth specification, and the goal is to maximize the total bandwidth of the connected pairs (throughput), while satisfying the bandwidth constraints (assuming each edge has unit capacity). These classical problems are central to routing and admission control in high-speed networks and in optical networks. We present the first known constant-competitive algorithms for both problems on the line. This settles an open problem of Garay et al. and of Leonardi. Moreover, to the best of our knowledge, all previous algorithms for any of these problems are O(log n)-competitive, where n is the number of vertices in the network (and obviously noncompetitive for the continuous line). Our algorithms are randomized and preemptive. Our results should be contrasted with the Ω(log n) lower bounds for deterministic preemptive algorithms of Garay et al. and the Ω(log n) lower bounds for randomized non-preemptive algorithms of Lipton and Tomkins and of Awerbuch et al. Interestingly, nonconstant lower bounds were proved by Canetti and Irani for randomized preemptive algorithms for related problems, but not for these exact problems.

2.
李超  林亚平 《计算机工程》2004,30(22):101-103
A congestion control mechanism based on bandwidth estimation is proposed for wireless networks. The mechanism uses the packet arrival times carried in TCP acknowledgment frames to estimate the packet arrival rate, and from this derives an estimate of the available bandwidth. The bandwidth estimate is then used to update the congestion window, so that the congestion control mechanism is not triggered by link errors, thereby improving TCP performance over wireless networks. Experimental results show that the algorithm reduces the impact of link errors on TCP performance and improves TCP throughput over wireless networks.
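The estimation loop described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the paper's code: the sender smooths per-ACK rate samples with an EWMA and sizes the congestion window from the bandwidth-delay product instead of halving it on every loss.

```python
class BandwidthEstimator:
    """Sketch: estimate available bandwidth from ACK arrival times (EWMA)."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha        # EWMA smoothing factor
        self.estimate = 0.0       # estimated bandwidth, bytes/second
        self.last_time = None     # arrival time of the previous ACK

    def on_ack(self, acked_bytes, arrival_time):
        """Update the bandwidth estimate from one acknowledgment."""
        if self.last_time is not None:
            dt = arrival_time - self.last_time
            if dt > 0:
                sample = acked_bytes / dt   # instantaneous rate sample
                self.estimate = (self.alpha * self.estimate
                                 + (1 - self.alpha) * sample)
        self.last_time = arrival_time
        return self.estimate

    def congestion_window(self, rtt, mss):
        """Window (in segments) from the bandwidth-delay product, so a
        random wireless loss does not force the window to collapse."""
        return max(1, int(self.estimate * rtt / mss))
```

On a loss the sender would reset its window to this bandwidth-derived value rather than halving it blindly, which is what shields throughput from non-congestion losses.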

3.
We review two approaches, the Standard Clock (SC) technique and Augmented System Analysis (ASA), that have been proposed for generating sample paths of Discrete Event Systems (DES) in parallel. These are placed in the unifying framework of the fundamental sample path constructability problem: for a finite discrete parameter set Θ = {θ1, ..., θm}, given a sample path under θ1, the problem is to simultaneously construct sample paths under all remaining parameter values. Using the ASA approach we then consider the problem of smoothing arbitrary, generally bursty, and possibly nonstationary traffic processes which are encountered in many applications, especially in the area of flow control for integrated-service, high-speed networks. We derive some basic structural properties of a smoothing scheme known as the Leaky Bucket (LB) mechanism, through which it is seen that the variability of a traffic process can be monotonically decreased by decreasing an integer-valued parameter of this scheme. Finally, we show that a sample path under any value of this parameter is constructable with respect to an observed sample path under any other value. Therefore, by controlling this parameter online, we show how simple iterative optimization schemes can be used to achieve typical design objectives such as keeping both the mean packet delay due to smoothing and the variability of the traffic process low.
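The monotonicity property described for the Leaky Bucket can be illustrated with a toy discrete-time simulation (an illustrative sketch, not the paper's model): tokens are generated once per slot into a pool capped at an integer bucket size, and shrinking that integer parameter yields a strictly smoother departure process.

```python
def leaky_bucket(arrivals, bucket_size):
    """Discrete-time Leaky Bucket sketch: one token generated per slot,
    pool capped at `bucket_size`; each departure consumes a token, so a
    smaller bucket passes smaller bursts."""
    tokens = bucket_size
    queue = 0
    departures = []
    for arriving in arrivals:
        tokens = min(bucket_size, tokens + 1)  # periodic token generation
        queue += arriving                      # possibly bursty input
        out = min(queue, tokens)               # burst limited by the pool
        queue -= out
        tokens -= out
        departures.append(out)
    return departures
```

For a bursty input such as [5, 0, 0, 0, 0] repeated, bucket_size = 1 yields a perfectly smooth one-packet-per-slot output, while bucket_size = 10 lets every 5-packet burst through intact; the total traffic carried is the same, only its variability (and the smoothing delay) changes.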

4.
In this paper we discuss the experimental implementation of a chemical reaction–diffusion processor for robot motion planning in terms of finding the shortest collision-free path for a robot moving in an arena with obstacles. These reaction–diffusion chemical processors for robot navigation are not designed to compete with existing silicon-based controllers. They are intended for incorporation into future generations of soft-bodied robots built of electro- and chemo-active polymers. In this paper we consider the notion of processing as being implicit in the physical medium constituting the body of a soft robot. This work therefore represents some early steps in the employment of excitable media controllers. An image of the arena in which the robot is to navigate is mapped onto a thin-layer chemical medium using a method that allows obstacles to be represented as local changes in the reactant concentrations. Disturbances created by the objects generate diffusive and phase wave fronts. The spreading waves approximate a repulsive field generated by the obstacles. This repulsive field is then input into a discrete model of an excitable reaction–diffusion medium, which computes a tree of shortest paths leading to a selected destination point. Two types of chemical processors are discussed: a disposable palladium processor, which executes arena mapping from a configuration of obstacles given before an experiment, and a reusable Belousov–Zhabotinsky processor, which allows for online path planning and adaptation for dynamically changing configurations of obstacles.

5.
Advanced object-oriented computing platforms such as the Common Object Request Broker Architecture (CORBA) provide a conducive and standardized framework for the development of distributed applications. Most off-the-shelf CORBA implementations are built over legacy network transports and distributed processing platforms such as TCP/IP and RPC. They are not suitable for real-time applications due to their high processing overheads and their lack of features and mechanisms for supporting quality of service both at the network level and at the end-host level. To overcome this limitation we have designed and implemented a CORBA-based Real Time Stream Service (RTSS) that allows real-time streams to be managed through the CORBA channel while bypassing the heavy CORBA protocol stacks. RTSS aims to achieve an integrated QOS framework that incorporates both host scheduling and end-to-end network-level QOS to better support the processing of distributed multimedia applications over ATM networks. For host scheduling, a novel frequency-based scheduling mechanism has been proposed to cope with dynamic CPU load conditions. The scheme has been implemented for a stand-alone host and will be extended to the networked environment. For network-level QOS, RTSS provides object-oriented application programming interfaces (APIs) which guarantee end-to-end QOS when operating directly over ATM adaptation layers. The benefits of RTSS for the development of real-time multimedia distributed applications are demonstrated through a number of experiments.

6.
We study path integration on a quantum computer that performs quantum summation. We assume that the measure of path integration is Gaussian, with the eigenvalues of its covariance operator of order j^(-k) with k > 1. For the Wiener measure occurring in many applications we have k = 2. We want to compute an ε-approximation to path integrals whose integrands are at least Lipschitz. We prove: Path integration on a quantum computer is tractable. Path integration on a quantum computer can be solved roughly 1/ε times faster than on a classical computer using randomization, and exponentially faster than on a classical computer with a worst-case assurance. The number of quantum queries needed to solve path integration is roughly the square root of the number of function values needed on a classical computer using randomization. More precisely, the number of quantum queries is at most 4.46/ε. Furthermore, a lower bound is obtained for the minimal number of quantum queries which shows that this bound cannot be significantly improved. The number of qubits is polynomial in 1/ε. Furthermore, for the Wiener measure the degree is 2 for Lipschitz functions, and the degree is 1 for smoother integrands. PACS: 03.67.Lx; 31.15.Kb; 31.15.-p; 02.70.-c

7.
When the TCP protocol is used in heterogeneous networks, service QoS cannot be guaranteed. Specifically, on the one hand TCP adapts poorly to wireless networks, raising the packet loss rate and lowering throughput; on the other hand, TCP does not distinguish service priorities and thus cannot meet the needs of high-priority services. This paper therefore proposes a QoS-guaranteeing TCP protocol for heterogeneous networks based on a predator-prey model (QoS-Internet Predator based on Prey Model, Q-IPPM). The Q-IPPM algorithm not only adapts TCP's congestion control architecture to the bandwidth of the heterogeneous network, but also controls the traffic share of each service according to its load and priority. Experimental results show that Q-IPPM significantly improves the throughput of heterogeneous networks and reduces the packet loss rate, and that it allocates bandwidth to services on demand according to their priority and load, thereby guaranteeing service QoS in heterogeneous networks.

8.
Dhananjay S.  Tom  Jim 《Computer Networks》2003,43(6):787-804
With ubiquitous computing and network access now a reality, multiple network conduits are becoming widely available to mobile as well as static hosts: for instance wired connections, 802.11-style wireless LANs, Bluetooth, and cellular phone modems. Selection of the preferred mode of data transfer is a dynamic optimization problem which depends on the type of application, its bandwidth/latency/jitter requirements, current network conditions (such as congestion or traffic patterns), cost, power consumption, battery life, and so on. Furthermore, since wireless bandwidth is likely to remain a scarce resource, we foresee scenarios wherein mobile hosts will require simultaneous data transfer across multiple IP interfaces to obtain higher overall bandwidth. We present a brief overview of existing work which enables the simultaneous use of multiple network interfaces and identify the applicability as well as strengths and weaknesses of these related approaches. We then propose a new mechanism to aggregate the bandwidth of multiple IP paths by splitting a data flow across multiple network interfaces at the IP level. We have analyzed the performance characteristics of our aggregation scheme and demonstrate significant gains when the network paths being aggregated have similar bandwidth and latency characteristics. In addition, our method is transparent to transport (TCP/UDP) and higher layers, and allows the use of multiple network interfaces to enhance reliability. Our analysis identifies the conditions under which the proposed scheme, or any other scheme that stripes a single TCP connection across multiple IP paths, can be used to increase throughput.
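One simple way to split a flow across interfaces in proportion to their bandwidths, in the spirit of the IP-level striping described above, is surplus round robin: each interface accrues credit proportional to its capacity, and each packet goes to the interface with the largest surplus. This is an illustrative sketch, not the authors' actual scheduler.

```python
def stripe(num_packets, capacities):
    """Assign packets to interfaces in proportion to their capacities
    using surplus round robin (weighted deficit-style scheduling).
    Integer credits keep the arithmetic exact."""
    credits = [0] * len(capacities)
    total = sum(capacities)
    assignment = []
    for _ in range(num_packets):
        for i, c in enumerate(capacities):
            credits[i] += c                  # accrue credit per packet
        best = max(range(len(credits)), key=lambda j: credits[j])
        credits[best] -= total               # spend one packet's worth
        assignment.append(best)
    return assignment
```

With capacities [2, 1] the packets interleave 2:1 (0, 1, 0, 0, 1, 0, ...), keeping both paths busy; this works best when the paths have comparable latency, which is the regime in which the paper reports the largest gains.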

9.
Research on bandwidth measurement methods for high-speed networks
王乐晓  张延园  赵晓楠 《计算机工程》2010,36(15):103-104,107
To address the problem that the NIC throughput of a single low-speed node cannot supply the bandwidth required by the link under test, this work measures link bandwidth over TCP by having multiple nodes generate network traffic in parallel, delivering a high-bandwidth load to the system under test. Experiments verify the feasibility of this parallel measurement method, and analysis shows that the main TCP factors affecting the measured link bandwidth are the TCP window size and the buffer size.
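The sizing argument behind the parallel approach can be sketched numerically (illustrative figures only, not the paper's testbed): a single TCP sender is limited by min(window/RTT, NIC rate), so saturating a fast link requires enough parallel senders to cover the gap.

```python
import math

def node_throughput(window_bytes, rtt_s, nic_rate):
    """A single TCP sender cannot exceed its window/RTT limit or its
    NIC line rate, whichever is smaller (bytes per second)."""
    return min(window_bytes / rtt_s, nic_rate)

def required_nodes(link_capacity, window_bytes, rtt_s, nic_rate):
    """Number of parallel senders needed to load the link under test."""
    per_node = node_throughput(window_bytes, rtt_s, nic_rate)
    return math.ceil(link_capacity / per_node)
```

With a 64 KB window and a 10 ms RTT, one sender tops out near 6.55 MB/s, so offering a 10 Gb/s load takes on the order of two hundred senders; enlarging the window or buffer reduces that count, matching the finding that window and buffer size are the dominant factors.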

10.
Horst  Steven 《Minds and Machines》1999,9(3):347-381
Over the past several decades, the philosophical community has witnessed the emergence of an important new paradigm for understanding the mind. The paradigm is that of machine computation, and its influence has been felt not only in philosophy, but also in all of the empirical disciplines devoted to the study of cognition. Of the several strategies for applying the resources provided by computer and cognitive science to the philosophy of mind, the one that has gained the most attention from philosophers has been the Computational Theory of Mind (CTM). CTM was first articulated by Hilary Putnam (1960, 1961), but finds perhaps its most consistent and enduring advocate in Jerry Fodor (1975, 1980, 1981, 1987, 1990, 1994). It is this theory, and not any broader interpretations of what it would be for the mind to be a computer, that I wish to address in this paper. What I shall argue here is that the notion of symbolic representation employed by CTM is fundamentally unsuited to providing an explanation of the intentionality of mental states (a major goal of CTM), and that this result undercuts a second major goal of CTM, sometimes referred to as the vindication of intentional psychology. This line of argument is related to the discussions of derived intentionality by Searle (1980, 1983, 1984) and Sayre (1986, 1987). But whereas those discussions seem to be concerned with the causal dependence of familiar sorts of symbolic representation upon meaning-bestowing acts, my claim is rather that there is not one but several notions of meaning to be had, and that the notions that are applicable to symbols are conceptually dependent upon the notion that is applicable to mental states in the fashion that Aristotle referred to as paronymy.
That is, an analysis of the notions of meaning applicable to symbols reveals that they contain presuppositions about meaningful mental states, much as Aristotle's analysis of the sense of 'healthy' that is applied to foods reveals that it means 'conducive to having a healthy body'; hence any attempt to explain mental semantics in terms of the semantics of symbols is doomed to circularity and regress. I shall argue, however, that this does not have the consequence that computationalism is bankrupt as a paradigm for cognitive science, as it is possible to reconstruct CTM in a fashion that avoids these difficulties and makes it a viable research framework for psychology, albeit at the cost of losing its claims to explain intentionality and to vindicate intentional psychology. I have argued elsewhere (Horst, 1996) that local special sciences such as psychology do not require vindication in the form of demonstrating their reducibility to more fundamental theories, and hence failure to make good on these philosophical promises need not compromise the broad range of work in empirical cognitive science motivated by the computer paradigm in ways that do not depend on these problematic treatments of symbols.

11.
Network coding is considered as a promising technique to increase the bandwidth available in a wireless network. Many studies show that network coding can improve flow throughput only if an appropriate routing algorithm is used to identify paths with coding opportunities. Nevertheless, a good routing mechanism is very difficult to develop. Existing solutions either do not estimate the path bandwidth precisely enough or cannot identify the best path in some situations. In this paper, we describe our coding-aware routing protocol that provides a better path bandwidth estimate and is able to identify high throughput paths. Extensive NS2 simulations show that our protocol outperforms existing mechanisms.

12.
The primary purpose of parallel computation is the fast execution of computational tasks that are too slow to perform sequentially. However, it was shown recently that a second equally important motivation for using parallel computers exists: Within the paradigm of real-time computation, some classes of problems have the property that a solution to a problem in the class computed in parallel is better than the one obtained on a sequential computer. What represents a better solution depends on the problem under consideration. Thus, for optimization problems, better means closer to optimal. Similarly, for numerical problems, a solution is better than another one if it is more accurate. The present paper continues this line of inquiry by exploring another class enjoying the aforementioned property, namely, cryptographic problems in a real-time setting. In this class, better means more secure. A real-time cryptographic problem is presented for which the parallel solution is provably, considerably, and consistently better than a sequential one. It is important to note that the purpose of this paper is not to demonstrate merely that a parallel computer can obtain a better solution to a computational problem than one derived sequentially. The latter is an interesting (and often surprising) observation in its own right, but we wish to go further. It is shown here that the improvement in quality can be arbitrarily high (and certainly superlinear in the number of processors used by the parallel computer). This result is akin to superlinear speedup, a phenomenon itself originally thought to be impossible.

13.
In this paper we use a free-fall approach to develop a high-level control/command strategy for a bipedal robot called BIPMAN, based on a multi-chain mechanical model with a general control architecture. The strategy is composed of three levels: the Legs and Arms level, the Coordinator level, and the Supervisor level. The Coordinator level is devoted to controlling leg movements and to ensuring the stability of the whole biped. Perturbation effects threaten the equilibrium of the humanoid robot and can only be compensated for using a dynamic control strategy, which is based on dynamic stability studies with a center-of-mass acceleration control and a force distribution on each leg and arm. Free fall in the gravity field is assumed to be deeply involved in human locomotor control. According to studies of this specific motion through a direct dynamic model, the notion of equilibrium classes is introduced. These classes allow one to define time intervals in which the biped is able to maintain its posture. This notion is used for the definition of a reconfigurable high-level control of the robot.

14.
GridFTP is a secure and reliable high-performance parallel data transfer protocol used for transferring massive amounts of widely distributed data. Currently it allows users to configure the number of parallel streams and the socket buffer size. However, its tuning procedure for the optimal combination is a time-consuming task. The socket handlers and buffers are important system resources and must therefore be carefully managed. In this paper, we propose a scheme to achieve high throughput even with a smaller buffer size, and also derive a regression equation to predict the optimal combination of resources for a connection. TCP with our scheme obtains higher throughput, and uses less memory for the same throughput, than the original TCP scheme. In addition, the regression equation is verified by comparing measured and predicted values, and we apply the equation to an actual experiment on the KOrea advanced REsearch Network (KOREN). The result demonstrates that the equation provides excellent predictions with only an 8% error bound.
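The trade-off the regression captures (more parallel streams compensating for smaller socket buffers) can be sketched with a first-order throughput model; this is an illustrative simplification, not the paper's fitted regression equation.

```python
import math

def gridftp_throughput(n_streams, buf_bytes, rtt_s, capacity):
    """First-order model: each stream is window-limited at buf/RTT,
    and the streams together cannot exceed the bottleneck capacity."""
    return min(n_streams * buf_bytes / rtt_s, capacity)

def smallest_buffer(n_streams, target_bps, rtt_s):
    """Smallest per-stream socket buffer (bytes) that still reaches the
    target rate, given the number of parallel streams."""
    return math.ceil(target_bps * rtt_s / n_streams)
```

Doubling the stream count halves the buffer each stream needs for the same aggregate rate, which is why high throughput is attainable with a smaller buffer, up to the point where the bottleneck capacity (or per-stream overhead, ignored here) takes over.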

15.
The transmission bandwidth between two nodes in mobile ad hoc networks is important in terms of power consumption. However, the bandwidth between two nodes is always treated the same, regardless of the distance between them. If a node is equipped with a GPS device to determine this distance, hardware cost and power consumption increase. In this paper, we propose a bandwidth-based power-aware routing protocol that uses signal detection instead of GPS devices to determine the distance. In our proposed routing protocol, we use the received signal variation to predict the transmission bandwidth and the lifetime of a link. Accordingly, the possible amount of data that can be transmitted, and the remaining power of the nodes in the path after data transmission, can be predicted. By predicting these quantities, we can design a bandwidth-based power-aware routing protocol that is power-efficient and prolongs network lifetime. In our simulation, we compare our proposed routing protocol with two signal-based routing protocols, SSA and ABR, and a power-aware routing protocol, MMBCR, in terms of the throughput, the average transmission bandwidth, the number of rerouting paths, the path lifetime, the power consumed when a byte is transmitted, and the network lifetime (the ratio of active nodes).
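The prediction step can be sketched as a linear extrapolation of recent signal samples (an illustrative sketch; the paper's predictor may use a different model): fit a line to (time, signal strength) pairs and solve for the time the line crosses the minimum receivable level.

```python
def predict_link_lifetime(samples, threshold):
    """Least-squares line through (time, signal) samples, extrapolated
    to the time the signal falls to `threshold`; returns the remaining
    lifetime from the last sample, or infinity if not degrading."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_s = sum(s for _, s in samples) / n
    num = sum((t - mean_t) * (s - mean_s) for t, s in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = num / den                 # e.g. dBm per second
    if slope >= 0:
        return float('inf')           # stable or improving link
    t_cross = mean_t + (threshold - mean_s) / slope
    return t_cross - samples[-1][0]
```

A link losing 2 dBm per second from -66 dBm, with a -80 dBm receive threshold, is predicted to survive 7 more seconds; route selection would then prefer links whose predicted lifetime covers the expected transfer.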

16.
Software testing techniques based upon automated analysis of control flow are useful for improving software reliability. The fundamental types of control flow analysis, in ascending order of effectiveness, are statement, branch, and path coverage. Automated tools that perform branch coverage analysis are now accepted practice, and researchers are exploring the area that exists between branch and path analysis. Path testing is complicated by the huge number of paths in ordinary programs and by anomalies such as infeasible paths. The Ct testing strategy is a method for obtaining a manageable set of path classes by specifying a minimum iteration count k. The Ct test coverage metric is a measurement of the proportion of Ct path classes that are exercised within a program. This paper describes an initial experiment in which a special, automated tool was used to evaluate the Ct coverage of tests for a C program of significant size. The results of the study showed that high values of branch coverage may not necessarily imply high path coverage, that approximately 15% of the functions analysed were too complex to analyse effectively, and that the majority of the remaining functions achieved on the order of 11% Ct (k=1) coverage. Experimental work in the attainment of high Ct coverage suggests the development of a new, more efficient software testing strategy, which combines Ct path and data flow analysis.
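The effect of the iteration bound k on the number of path classes can be illustrated with a small sketch (hypothetical code, not the paper's tool, and a simplification of the Ct idea): enumerate entry-to-exit paths in a control-flow graph while capping how often each node may repeat.

```python
def count_path_classes(adj, src, dst, cap, visits=None):
    """Count entry-to-exit paths in a digraph where each node may be
    visited at most `cap` times; the cap plays the role of the
    iteration count k that bounds loop traversals."""
    if visits is None:
        visits = {}
    visits[src] = visits.get(src, 0) + 1
    try:
        if visits[src] > cap:
            return 0                  # loop bound exhausted
        if src == dst:
            return 1                  # one complete path class
        return sum(count_path_classes(adj, nxt, dst, cap, visits)
                   for nxt in adj.get(src, []))
    finally:
        visits[src] -= 1
```

On a tiny graph with one self-loop (A -> B, B -> B, B -> C) the count grows linearly with the cap, whereas without a bound the path count would be infinite; bounding iterations is exactly what makes path coverage measurable.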

17.
CUBIC is a TCP-friendly algorithm that uses a cubic curve, independent of the round-trip time, to rapidly recover from a packet loss. New releases of Linux use CUBIC for the TCP protocol. In this paper, we show that if the socket buffer size of a sender TCP is small compared with the bandwidth-delay product, the Linux TCP window size drops to almost zero every time a packet loss occurs. Using this fact, we estimate data uploading time in long-distance networks with packet loss. We also discuss improving the uploading time by increasing the cumulative socket buffer size in two ways: a larger buffer size or parallel connections.
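For reference, the cubic growth curve that gives CUBIC its name can be written directly; the constants C = 0.4 and beta = 0.3 are the conventional defaults from the CUBIC literature, not values taken from this paper.

```python
def cubic_window(t, w_max, c=0.4, beta=0.3):
    """CUBIC window W(t) = C*(t - K)^3 + W_max, where K is the time
    needed to regrow to W_max after a multiplicative decrease by beta."""
    k = (w_max * beta / c) ** (1.0 / 3.0)
    return c * (t - k) ** 3 + w_max
```

At t = 0 the window sits at W_max * (1 - beta); it plateaus near W_max around t = K and then probes beyond it, all independently of the RTT. The paper's observation is that with an undersized socket buffer the effective window collapses to almost zero on loss instead of following this curve.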

18.
In this paper different algorithms are presented and evaluated for designing Virtual Private/Overlay Networks (VPNs/VONs) over any network that supports resource partitioning, e.g. ATM (Asynchronous Transfer Mode), MPLS (Multi Protocol Label Switching), or SDH/SONET (Synchronous Digital Hierarchy/Synchronous Optical Networking). All algorithms incorporate protection as well. The VPNs/VONs are formed by full-mesh demand sets between VPN/VON endpoints. The service demands of VPNs/VONs are characterized by the bandwidth requirements of node pairs (pipe model). We investigated four design modes with three pro-active path-based shared protection path algorithms and four heuristics to calculate the pairs of paths. The design mode determines the means of traffic concentration. The protection path algorithms use Dijkstra's shortest path calculation with different edge weights. The demands are routed one by one, therefore the order in which they are processed matters. To eliminate this factor we used three heuristics (simulated allocation, simulated annealing, threshold accepting). We present numerical results obtained by simulation regarding the required total amount of capacity, the number of reserved edges, and the average length of paths.
Péter Hegyi received an MSc (2004) degree from the Budapest University of Technology and Economics, Hungary, where he is currently a PhD student at the Department of Telecommunications and Media Informatics. His research interests focus on the design of intra- and inter-domain multilayer grooming networks and routing with protection. He has been involved in several related projects (IKTA, ETIK, NOBEL). Markosz Maliosz is a researcher in the High Speed Networks Laboratory, Department of Telecommunication and Media Informatics at the Budapest University of Technology and Economics, where he received his MSc degree in Computer Science (1998). He has participated in projects concerning telecommunication services, network device control, and Voice and Video over IP. His current research areas are Virtual Private Networking and traffic engineering in optical networks. Ákos Ladányi is a student at the Department of Telecommunications and Media Informatics at the Budapest University of Technology and Economics. His research interests focus on routing, network resilience, and combinatorial optimization. Tibor Cinkler received MSc (1994) and PhD (1999) degrees from the Budapest University of Technology and Economics, Hungary, where he is currently Associate Professor at the Department of Telecommunications and Media Informatics. His research interests focus on routing, design, configuration, dimensioning and resilience of IP, MPLS, ATM, ngSDH and particularly of WR-DWDM-based multilayer networks. He is the author of over 60 refereed scientific publications and of 3 patents.

19.
This study demonstrates an objective method used to evaluate the enhanceability of commercial software. It examines the relationship between enhancement and repair, and suggests that enhancement be considered when developing formal models of defect cause. An alternative definition of defect-prone software is presented that concentrates attention on software requiring unusually extensive repair relative to the magnitude of planned enhancement.

20.
Conditions are presented under which the maximum of the Kolmogorov complexity (algorithmic entropy) K(ω1...ωN) is attained, given the cost f(ωi) of a message ω1...ωN. Various extremal relations between the message cost and the Kolmogorov complexity are also considered; in particular, the minimization problem for the function βf(ωi) − K(ω1...ωN) is studied. Here, β is a parameter, called the temperature by analogy with thermodynamics. We also study domains of small variation of this function.

