Similar Documents
Found 20 similar documents.
1.
This paper seeks to enhance network survivability under a disaster and reduce the expected post-disaster response time for transportation networks through pre-disaster investment decisions. The planning focuses on determining which links of the network to strengthen through investment under two types of uncertainty: the disaster characteristics and the network that survives each disaster. A bi-level stochastic optimization model is proposed for this problem, in which link investment decisions are made at the upper level to enhance network survivability subject to a budget constraint, such that the expected post-disaster response time is minimized at the lower level. A two-stage heuristic algorithm is proposed to obtain effective solutions efficiently. The numerical experiments indicate that the proposed heuristic converges, within an acceptable tolerance, to a fixed point representing a feasible solution of the bi-level stochastic optimization model, and that this solution is effective under disasters of moderate severity. Parametric and sensitivity analyses reinforce the need for a holistic approach that integrates multiple relevant considerations when determining the link investment decisions.
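A minimal sketch of the bi-level structure described above, with entirely hypothetical data and names (LINKS, BUDGET, survival probabilities) and not the authors' two-stage heuristic: the upper level enumerates budget-feasible sets of links to strengthen, while the lower level estimates the expected post-disaster response time by sampling disaster realizations and solving shortest paths on the surviving network.

```python
import heapq, itertools, random

# Hypothetical toy network: link -> (strengthening cost, travel time)
LINKS = {("depot", "a"): (3, 4), ("depot", "b"): (2, 6),
         ("a", "c"): (2, 3), ("b", "c"): (4, 2), ("a", "b"): (1, 5)}
BUDGET = 5
P_SURVIVE_WEAK, P_SURVIVE_STRONG = 0.5, 0.95   # assumed survival probabilities

def shortest_time(surviving, src, dst):
    """Dijkstra on the surviving links; returns inf if dst is unreachable."""
    adj = {}
    for (u, v), (_, t) in surviving.items():
        adj.setdefault(u, []).append((v, t))
        adj.setdefault(v, []).append((u, t))
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, t in adj.get(u, []):
            if d + t < dist.get(v, float("inf")):
                dist[v] = d + t
                heapq.heappush(heap, (d + t, v))
    return float("inf")

def expected_response_time(strengthened, samples=2000, rng=random.Random(0)):
    """Lower level: Monte Carlo estimate over random disaster realizations."""
    total = 0.0
    for _ in range(samples):
        surviving = {e: LINKS[e] for e in LINKS
                     if rng.random() < (P_SURVIVE_STRONG if e in strengthened
                                        else P_SURVIVE_WEAK)}
        t = shortest_time(surviving, "depot", "c")
        total += t if t < float("inf") else 100.0   # penalty when unreachable
    return total / samples

# Upper level: brute-force search over budget-feasible investment sets.
best = min((frozenset(s) for r in range(len(LINKS) + 1)
            for s in itertools.combinations(LINKS, r)
            if sum(LINKS[e][0] for e in s) <= BUDGET),
           key=expected_response_time)
print("strengthen:", sorted(best))
```

In a realistically sized instance the upper level would of course be explored heuristically rather than by enumeration, which is exactly the role of the paper's two-stage heuristic.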

2.
The objective is to minimize expected travel time from any origin to a specific destination in a congestible network with correlated link costs. Each link is assumed to be in one of two possible conditions. Conditional probability density functions for link travel times are assumed known for each condition. Conditions over the traversed links are taken into account for determining the optimal routing strategy for the remaining trip. This problem is treated as a multistage adaptive feedback control process. Each stage is described by the physical state (the location of the current decision point) and the information state (the service level of the previously traversed links). Proof of existence and uniqueness of the solution to the basic dynamic programming equations and a solution procedure are provided.  相似文献   
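A small illustrative sketch of adaptive routing with recourse, deliberately simplified relative to the paper: link conditions are assumed independent and the network acyclic (the paper handles correlated conditions and an information state over traversed links), and all data are hypothetical. At each node the traveler observes the conditions of the outgoing links and a backward recursion gives the expected cost of the optimal adaptive policy.

```python
from functools import lru_cache
from itertools import product

# Hypothetical acyclic network: arc -> (time if "good", time if "bad", P(good))
ARCS = {("o", "a"): (5, 9, 0.7), ("o", "b"): (4, 12, 0.6),
        ("a", "d"): (6, 10, 0.8), ("b", "d"): (7, 15, 0.5),
        ("a", "b"): (2, 3, 0.9)}
DEST = "d"

def out_arcs(node):
    return [(u, v) for (u, v) in ARCS if u == node]

@lru_cache(maxsize=None)
def expected_cost(node):
    """E[cost-to-go] under the optimal adaptive policy (independent conditions)."""
    if node == DEST:
        return 0.0
    arcs = out_arcs(node)
    total = 0.0
    # Enumerate the joint realization of the outgoing links' conditions,
    # then pick the best arc given what the traveler observes at this node.
    for states in product((0, 1), repeat=len(arcs)):   # 0 = good, 1 = bad
        prob, best = 1.0, float("inf")
        for (u, v), s in zip(arcs, states):
            good, bad, p = ARCS[(u, v)]
            prob *= p if s == 0 else 1 - p
            best = min(best, (good if s == 0 else bad) + expected_cost(v))
        total += prob * best
    return total

print("expected travel time from o:", round(expected_cost("o"), 3))
```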

3.
Damage to infrastructure, especially to highways and roads, adversely affects accessibility to disaster areas. Predicting accessibility to demand points from the supply points by a systematic model would lead to more effective emergency facility location decisions. To this effect, we model the spatial impact of the disaster on network links by random failures with dependency, such that failure of a link induces failure of nearby links that are structurally more vulnerable. For each demand point, a set of alternative paths is generated from each potential supply point so that the shortest surviving path will be used for relief transportation after the disaster. The objective is to maximize the expected demand coverage within a specified distance over all possible network realizations. To overcome the computational difficulty caused by the extremely large number of possible outcomes, we propose a tabu search heuristic that evaluates candidate solutions over a sample of network scenarios. The scenario generation algorithm that represents the proposed distance- and vulnerability-based failure model is the main contribution of our study. The tabu search algorithm is applied to the Istanbul earthquake preparedness case with a detailed analysis comparing solutions found in the no-link-failure, independent-link-failure, and dependent-link-failure cases. The results show that incorporating dependent link failures into the model improves the covered demand percentages significantly.
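A rough sketch of the dependent-failure idea, with hypothetical parameters (LINKS, NEAR, SPREAD) rather than the paper's scenario generator: every link fails independently with a base probability, and each failure raises the failure probability of nearby, structurally more vulnerable links, so failures cluster spatially.

```python
import random

# Hypothetical links: id -> (base failure probability, structural vulnerability 0..1)
LINKS = {"e1": (0.10, 0.2), "e2": (0.15, 0.8), "e3": (0.05, 0.5), "e4": (0.20, 0.9)}
# Links considered "nearby" (e.g., within some distance of each other)
NEAR = {"e1": ["e2"], "e2": ["e1", "e3"], "e3": ["e2", "e4"], "e4": ["e3"]}
SPREAD = 0.5   # how strongly a failure propagates to vulnerable neighbours

def sample_scenario(rng):
    """Return the set of failed links in one dependent-failure scenario."""
    failed = {e for e, (p, _) in LINKS.items() if rng.random() < p}
    frontier = list(failed)
    while frontier:                       # propagate induced failures
        e = frontier.pop()
        for nb in NEAR[e]:
            if nb in failed:
                continue
            _, vulnerability = LINKS[nb]
            if rng.random() < SPREAD * vulnerability:
                failed.add(nb)
                frontier.append(nb)
    return failed

rng = random.Random(42)
for i in range(1, 6):
    print(f"scenario {i}: failed links = {sorted(sample_scenario(rng))}")
```

A sample of such scenarios is what a tabu search of the kind described above would use to score candidate facility locations.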

4.
This paper considers the problem of selecting the optimum capacities of the links in a computer communication network which employs unreliable links. Given the nodes, links, link probabilities, grade of service and cost functions of the network, the objective of this problem is to find the optimum link capacities that minimize the network design cost, subject to the constraint equation involving the grade of service. This is essentially a combinatorial optimization problem. A general mathematical model for this problem is formulated and a set of feasible solutions is obtained using Lagrangean relaxation and subgradient optimization techniques. A simulation study has been performed to verify the model, and favourable results were obtained for a variety of nontrivial networks.
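A toy sketch of the Lagrangean relaxation idea for capacity selection, with made-up data and a simplified grade-of-service constraint (a bound on total delay), not the paper's exact model: the coupling constraint is moved into the objective with a multiplier, the relaxed problem then separates by link, and the multiplier is updated with a projected subgradient step.

```python
# Hypothetical data: per link, candidate capacities given as (cost, delay) pairs
OPTIONS = {
    "l1": [(2, 9.0), (5, 5.0), (9, 2.5)],
    "l2": [(3, 8.0), (6, 4.0), (10, 2.0)],
    "l3": [(1, 7.0), (4, 3.5), (8, 1.5)],
}
D_MAX = 12.0          # grade-of-service bound on total delay (assumed form)

def solve_relaxed(lam):
    """For a fixed multiplier, the relaxed problem separates link by link."""
    choice, cost, delay = {}, 0.0, 0.0
    for link, opts in OPTIONS.items():
        c, d = min(opts, key=lambda cd: cd[0] + lam * cd[1])
        choice[link], cost, delay = (c, d), cost + c, delay + d
    lower_bound = cost + lam * (delay - D_MAX)
    return choice, cost, delay, lower_bound

lam, step = 0.0, 1.0
best_feasible, best_lb = None, float("-inf")
for k in range(1, 51):                       # subgradient iterations
    choice, cost, delay, lb = solve_relaxed(lam)
    best_lb = max(best_lb, lb)               # best Lagrangean lower bound so far
    if delay <= D_MAX and (best_feasible is None or cost < best_feasible[0]):
        best_feasible = (cost, choice)       # keep the cheapest feasible design
    lam = max(0.0, lam + (step / k) * (delay - D_MAX))   # projected subgradient

print("best feasible cost:", best_feasible[0] if best_feasible else None)
print("best lower bound  :", round(best_lb, 2))
```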

5.
In recent years, there has been substantial demand to reduce packet loss on the Internet. Among the proposed schemes, finding backup paths in advance is considered an effective way to reduce the reaction time. Very commonly, a backup path is chosen to be the path most disjoint from the primary path, or, at the network level, a backup path is computed for each link (e.g., IPFRR). The validity of this straightforward choice rests on two assumptions: first, that all links are equally likely to fail; and second, that given today's high protection requirements, leaving links unprotected or sharing links between the primary and backup paths is unacceptable. Nevertheless, many studies have confirmed that the individual vulnerability of links on the Internet is far from equal. In addition, we have observed that full protection schemes (in this paper, full protection schemes are schemes that (1) use as the backup path the path most disjoint from the primary path, or (2) compute a backup path for each link) may introduce high cost (e.g., computation). In this paper, we argue that such approaches may not be cost-efficient and therefore propose a novel critical protection scheme based on link failure characteristics. First, we analyze the link failure characteristics based on real-world traces of CERNET2 (China Education and Research NETwork 2). The analysis clearly shows that the failure probabilities of the links in the CERNET2 backbone are heavy-tailed, i.e., a small set of links causes most of the failures. Based on this observation, we identify two key parameters that strongly impact link criticality and propose a critical protection scheme for both single-link and multi-link failure situations. We carefully analyze the implementation details and overhead of backup path schemes in today's Internet; the problem is formulated as an optimization problem to guarantee routing performance and minimize the backup cost. This cost is special in that it involves computational overhead. Based on this, we propose a novel Critical Protection Algorithm, itself fast, for both the single-link failure and the multi-link failure versions. A comprehensive set of evaluations with randomly generated topologies, real-world topologies, and the real traces from CERNET2 shows that our scheme achieves significant gains over full protection in both single-link and multi-link failure situations. It costs only about 30–60% of the full protection cost when the network's relative availability increment is 90% of that of the full protection scheme.
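A simple illustration of the heavy-tailed observation behind critical protection, using synthetic failure counts rather than CERNET2 data: rank links by observed failure frequency and protect only the small set that accounts for most failures, instead of computing a backup path for every link.

```python
# Hypothetical per-link failure counts taken from a monitoring trace
failure_counts = {"l1": 950, "l2": 400, "l3": 35, "l4": 20, "l5": 8,
                  "l6": 5, "l7": 3, "l8": 2, "l9": 1, "l10": 1}
TARGET_COVERAGE = 0.90     # protect links causing 90% of observed failures

total = sum(failure_counts.values())
protected, covered = [], 0
for link, count in sorted(failure_counts.items(), key=lambda kv: -kv[1]):
    if covered / total >= TARGET_COVERAGE:
        break
    protected.append(link)
    covered += count

print(f"protect {len(protected)}/{len(failure_counts)} links: {protected}")
print(f"covers {covered / total:.0%} of observed failures")
```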

6.
Intra-domain routing protocols are based on Shortest Path First (SPF) routing, where shortest paths are calculated between each pair of nodes (routers) using pre-assigned link weights, also referred to as the link metric. These link weights can be modified by network administrators in accordance with the routing policies of the network operator. The operator's objective is usually to minimize traffic congestion or minimize total routing costs subject to the traffic demands and the protocol constraints. However, determining a link weight combination that meets a network operator's objectives is a difficult task. In this paper, we study the link weight optimization problem in intra-domain networks. This problem is proved to be NP-hard with hard protocol constraints, e.g., that a flow is evenly distributed along the shortest paths between its origin and destination nodes. We present two fast heuristic approaches to generate efficient link metrics for intra-domain routing. Some promising experimental results are reported.
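A compact sketch of the inner evaluation such weight-setting heuristics rely on, with a toy topology and demands and a single shortest path per demand (the even ECMP split that makes the real problem NP-hard is deliberately left out): route every demand on the shortest path induced by the current weights, measure the maximum link utilization, and accept random weight perturbations that do not worsen it.

```python
import heapq, random

CAPACITY = 10.0
LINKS = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("b", "d")]  # undirected
DEMANDS = [("a", "d", 6.0), ("a", "c", 4.0), ("b", "d", 5.0)]          # (o, d, volume)

def shortest_path(weights, src, dst):
    """Dijkstra returning the list of links on one shortest path."""
    adj = {}
    for (u, v) in LINKS:
        w = weights[(u, v)]
        adj.setdefault(u, []).append((v, w, (u, v)))
        adj.setdefault(v, []).append((u, w, (u, v)))
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w, e in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, (u, e)
                heapq.heappush(heap, (d + w, v))
    path, node = [], dst
    while node != src:
        node, e = prev[node]
        path.append(e)
    return path

def max_utilization(weights):
    load = {e: 0.0 for e in LINKS}
    for o, d, vol in DEMANDS:
        for e in shortest_path(weights, o, d):
            load[e] += vol
    return max(l / CAPACITY for l in load.values())

rng = random.Random(1)
weights = {e: 1 for e in LINKS}                 # start with unit weights
best = max_utilization(weights)
for _ in range(500):                            # simple local search
    e = rng.choice(LINKS)
    candidate = dict(weights)
    candidate[e] = rng.randint(1, 20)
    u = max_utilization(candidate)
    if u <= best:
        weights, best = candidate, u
print("max utilization:", best, "weights:", weights)
```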

7.
We consider massively dense ad hoc networks and study their continuum limits as the node density increases and as the graph providing the available routes becomes a continuous area with location- and congestion-dependent costs. We study both the global optimal solution as well as the non-cooperative routing problem among a large population of users, where each user seeks a path from its origin to its destination so as to minimize its individual cost. Finally, we seek a (continuum version of the) Wardrop equilibrium. We first show how to derive meaningful cost models as a function of the scaling properties of the capacity of the network and of the density of nodes. We present various solution methodologies for the problem: (1) the viscosity solution of the Hamilton–Jacobi–Bellman equation, for the global optimization problem, (2) a method based on Green’s Theorem for the least cost problem of an individual, and (3) a solution of the Wardrop equilibrium problem using a transformation into an equivalent global optimization problem.

8.
To address the problems of high routing-algorithm complexity, low QoS flow satisfaction, and single-link failures in current SDN architectures, a multi-constrained QoS dual-path routing optimization algorithm based on software-defined networking (SDN_MCQDP) is proposed. The controller obtains global network state information and generates a destination-rooted directed acyclic graph. In the multi-constrained QoS route selection stage, the multi-constraint problem is transformed into a linear programming problem via a Lagrangian relaxation dual algorithm. Reverse-link pruning is then used to obtain node-disjoint dual redundant paths that satisfy the multiple QoS constraints, so that data transmission remains protected after a link failure. The algorithm is evaluated by simulation in terms of route computation time, link utilization, and QoS flow satisfaction. The results show that, compared with the MODLARAC, QT, RMCDP_RD, and H_MCOP algorithms, SDN_MCQDP effectively reduces transmission delay and route computation time, improves link utilization, and still satisfies QoS requirements after a link failure.
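As a rough illustration of the dual-path idea only (toy data; a simple remove-and-reroute heuristic, not the paper's Lagrangian-relaxation and reverse-link-pruning procedure): compute a primary shortest path, ban its intermediate nodes, and compute a second shortest path, yielding two node-disjoint routes when the heuristic succeeds.

```python
import heapq

# Hypothetical topology: undirected edges with delays
EDGES = [("s", "a", 1), ("a", "t", 2), ("s", "b", 2), ("b", "t", 2),
         ("a", "b", 1), ("s", "c", 4), ("c", "t", 1)]

def dijkstra(edges, src, dst, banned=frozenset()):
    adj = {}
    for u, v, w in edges:
        if u in banned or v in banned:
            continue
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path)), d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return None, float("inf")

primary, d1 = dijkstra(EDGES, "s", "t")
interior = set(primary[1:-1])                       # ban the primary's relay nodes
backup, d2 = dijkstra(EDGES, "s", "t", banned=interior)
print("primary:", primary, "delay", d1)
print("backup :", backup, "delay", d2)
```

Note the well-known caveat that this greedy removal can miss disjoint path pairs that algorithms such as Suurballe's would find, which is one reason more elaborate formulations like the one above are used.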

9.
Reliability of wireless sensor nodes against node-capture attacks
Nodes in wireless sensor networks typically have limited computation and storage capabilities, and because of the characteristics of such networks their data are vulnerable to attack, so confidentiality becomes an important concern. To address this, many studies have proposed public-key cryptosystems, but a common problem with this approach is establishing keys between source and destination nodes: when a source and a destination node establish a new key, they must exchange messages frequently to achieve a reliable link. Because of unreliable wireless connections, battery exhaustion, and similar factors, the routing mechanism of a wireless sensor network also changes continually, and the security problem becomes especially serious as the number of sensor nodes grows. This paper proposes a new data-distribution method based on a secret-sharing scheme to counter network attacks, and experiments demonstrate its effectiveness. Finally, with respect to security, the method is compared with TinySec, an existing wireless sensor network security architecture.

10.
In IP over WDM networks, a single physical link in the WDM layer often carries multiple logical links of the IP layer. Regardless of the mechanism used, a failure in the WDM layer must not be allowed to disconnect the upper-layer logical topology, so solving the survivable mapping problem for IP over WDM is particularly important. Building on existing static topology-mapping algorithms for IP over WDM, this paper proposes a mapping algorithm for single-node failures, whose goal is to ensure that after a single node failure in the WDM layer the IP-layer logical topology is not split into disconnected components, thereby preserving the connectivity of the IP topology.

11.
In this technical note, we address the combined problem of motion and network topology control in a group of mobile agents whose common objective is the flocking behavior of the group. Instead of assuming network connectivity, we enforce it by means of distributed topology control that decides on both deletion and creation of communication links between agents, adapting the network to the group's spatial distribution. With this protocol ensuring network connectivity, a decentralized motion controller aligns agent velocity vectors and regulates inter-agent distances to maintain existing network links. The stability of the flocking controller is established in continuous time by means of an observability argument on a quadratic form of the graph Laplacian that exploits the time delay between link deletion and creation caused by the topology control protocol, which induces a dwell time between network switches.

12.
Given a network system composed of links and nodes, the interaction mechanism of a coupled pair of links with respect to the failure of the whole network needs to be analyzed in order to improve network reliability effectively and economically. First, a saturated non-homogeneous Poisson process is used to characterize the link failure process; based on combinatorial counting, probability formulas for the four possible joint states of the two links are derived and, combined with the joint D-spectrum of the two links, a formula for the joint failure importance is developed to analyze the interaction mechanism of the two links with respect to network failure. Theoretical analysis shows that as time t tends to 0 or to infinity, the interaction effect between the two links becomes increasingly weak. Then, since computing the exact value of the joint failure importance is NP-hard, a Monte Carlo approximation algorithm is designed to estimate it. Finally, a road-network example is provided; the numerical results show that the proposed joint failure importance computation effectively explains the interaction mechanism of two links with respect to network failure.
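A simplified Monte Carlo sketch of estimating how a pair of links interacts with respect to network failure, using a toy network with static survival probabilities instead of the paper's non-homogeneous Poisson process and D-spectrum machinery, and one common definition of joint reliability importance rather than the paper's exact measure.

```python
import random

# Hypothetical road network: undirected links with survival probabilities
LINKS = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.7,
         ("c", "d"): 0.85, ("b", "d"): 0.75}
NODES = {"a", "b", "c", "d"}

def connected(up_links):
    """The network 'works' here if all nodes are mutually reachable."""
    adj = {n: set() for n in NODES}
    for u, v in up_links:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {"a"}, ["a"]
    while stack:
        for nb in adj[stack.pop()]:
            if nb not in seen:
                seen.add(nb)
                stack.append(nb)
    return seen == NODES

def joint_importance(e1, e2, samples=20000, rng=random.Random(0)):
    """Monte Carlo estimate of one common joint reliability importance:
       R(e1 up, e2 up) - R(up, down) - R(down, up) + R(down, down)."""
    others = [e for e in LINKS if e not in (e1, e2)]
    acc = 0.0
    for _ in range(samples):
        alive = {e for e in others if rng.random() < LINKS[e]}
        r = lambda s1, s2: connected(alive | ({e1} if s1 else set())
                                           | ({e2} if s2 else set()))
        acc += r(1, 1) - r(1, 0) - r(0, 1) + r(0, 0)
    return acc / samples

print("estimated joint importance of (a,b) and (c,d):",
      round(joint_importance(("a", "b"), ("c", "d")), 4))
```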

13.
The simplicity of the hypertext model behind the World Wide Web is a factor in its success, but this simplicity brings limitations. One of these limitations is the embedding of links in documents. Open Hypermedia addresses this by instead storing them in separate link databases. Meanwhile, the Adaptive Hypermedia approach seeks to enhance a user's experience by inserting personalised additional content and links on the web page. However, these techniques do not offer the user any control over the adaptation. In this paper, we propose the concept of a multi-dimensional linkbase for adaptive link presentation. Links are created and stored in a single, multi-dimensional linkbase that provides links for presentation based on the user's preferences and profile. We present a web-based system, the Inquiry-led Personalised Navigation System, that implements this multi-dimensional concept for controlling its personalisation of hyperlinks. We give the results of our evaluation, which confirm that user-controlled adaptation is a satisfactory approach to providing users with control over personalisation, and can alleviate the link overload problem.

14.
Optimal utilization of resources in present-day communication networks is a challenging task. Routing plays an important role in achieving optimal resource utilization. The open shortest path first (OSPF) routing protocol is widely used for routing packets from a source node to a destination node. This protocol assigns weights (or costs) to the links of a network. These weights are used to determine the shortest paths from all source nodes to all destination nodes. Assignment of these weights to the links is classified as an NP-hard problem. This paper formulates the OSPF weight setting problem as a multi-objective optimization problem, with maximum utilization, number of congested links, and number of unused links as the optimization objectives. Since the objectives are conflicting in nature, an efficient approach is needed to balance the trade-off between these objectives. Fuzzy logic has been shown to efficiently solve multi-objective optimization problems. A fuzzy cost function for the OSPF weight setting problem is developed in this paper based on the Unified And-OR (UAO) operator. Two iterative heuristics, namely, simulated annealing (SA) and simulated evolution (SimE), have been implemented to solve the multi-objective OSPF weight setting problem using a fuzzy cost function. Results are compared with those found using other cost functions proposed in the literature (Sqalli et al. in Network Operations and Management Symposium, NOMS, 2006). Results suggest that, overall, the fuzzy cost function performs better than existing cost functions, with respect to both SA and SimE. Furthermore, SimE shows superior performance compared to SA. In addition, a comparison of SimE with NSGA-II shows that, overall, SimE demonstrates slightly better performance in terms of quality of solutions.

15.
Consider a situation where p facilities need to be located by a leader, on the nodes of a network, to provide maximum coverage of demand generated at nodes of the network. At some point in the future it is expected that one of the links of the network will become unusable, due either to a terrorist attack or a natural disaster (the follower). The follower's objective is to decide which link to remove. The leader's objective is to cover the most demand following such damage to a link. The problem is formulated and analyzed from the leader's perspective. An efficient approach to solving the follower's problem is constructed. The leader's problem is solved heuristically by an ascent algorithm, simulated annealing, and tabu search, using the efficient algorithm for the solution of the follower's problem. Computational experiments on 40 test problems ranging between 100 and 900 nodes and between 5 and 200 facilities provided good results.
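A small sketch of the follower's problem on a hypothetical toy instance (not the paper's efficient procedure): given the leader's facilities, remove each link in turn, recompute shortest distances, and report the link whose loss leaves the least demand covered within the coverage radius.

```python
import itertools

NODES = ["v1", "v2", "v3", "v4", "v5"]
DEMAND = {"v1": 10, "v2": 20, "v3": 15, "v4": 5, "v5": 25}
EDGES = {("v1", "v2"): 3, ("v2", "v3"): 4, ("v3", "v4"): 3,
         ("v4", "v5"): 5, ("v1", "v5"): 6, ("v2", "v5"): 7}
FACILITIES = {"v2", "v5"}       # the leader's located facilities
RADIUS = 6                      # coverage distance

def all_distances(edges):
    """Floyd-Warshall on the surviving undirected edges."""
    dist = {(i, j): 0 if i == j else float("inf") for i in NODES for j in NODES}
    for (u, v), w in edges.items():
        dist[(u, v)] = dist[(v, u)] = min(dist[(u, v)], w)
    for k, i, j in itertools.product(NODES, repeat=3):
        dist[(i, j)] = min(dist[(i, j)], dist[(i, k)] + dist[(k, j)])
    return dist

def covered_demand(edges):
    dist = all_distances(edges)
    return sum(DEMAND[n] for n in NODES
               if any(dist[(n, f)] <= RADIUS for f in FACILITIES))

# Follower: pick the single link whose removal minimizes the leader's coverage.
worst_link = min(EDGES, key=lambda e: covered_demand(
    {k: w for k, w in EDGES.items() if k != e}))
print("follower removes", worst_link,
      "-> coverage drops to",
      covered_demand({k: w for k, w in EDGES.items() if k != worst_link}),
      "from", covered_demand(EDGES))
```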

16.
This paper presents a redundant multicast routing problem in multilayer networks that arises from large-scale distribution of realtime multicast data (e.g., Internet TV, videocasting, online games, stock quotes). Since these multicast services commonly operate in multilayer networks, the communication paths need to be robust against a single router or link failure as well as multiple such failures due to shared risk link groups (SRLGs). The main challenge of this multicast is to ensure service availability and reliability using a path protection scheme, which is to find a redundant path that is SRLG-disjoint (diverse) from each working path. The objective of this problem is, therefore, to find two redundant multicast trees, each from one of the two redundant sources to every destination, at a minimum total communication cost, while the two paths from the two sources to every destination are guaranteed to be SRLG-diverse (i.e., links in the same risk group are disjoint). In this paper, we present two new mathematical programming models, edge-based and path-based, for the redundant multicast routing problem with SRLG-diverse constraints. Because the number of paths in the path-based model grows exponentially with the network size, it is impossible to enumerate all possible paths in real-life networks. We develop three approaches (probabilistic, non-dominated, and nearly non-dominated) to generate potentially good paths that may be included in the path-based model. This study is motivated by emerging applications of internet-protocol TV service, and we evaluate the proposed approaches using real-life network topologies. Our empirical results suggest that both models perform very well, and that the nearly non-dominated path approach outperforms all other path generation approaches.

17.
Deflection routing can be used in networks whose stations have the same number of input and output links. Fixed-length packets arrive synchronously on the station's input links at the beginning of each time slot, and each packet is routed to the output link that offers the shortest path to its destination. Since the number of packet buffers at each output link is finite, the simultaneous contention of two packets for the last buffer of the common output link must be resolved by “deflecting” one of the packets according to a specified criterion (e.g. at random, by destination proximity, or by packet age). Deflection routing can therefore be used with as few as one packet buffer per output link.

The potentially unbounded number of routes that a given packet can take makes analyzing the performance of such networks difficult. Using independence assumptions, we develop an efficient, high-fidelity performance model of deflection routing that allows us to estimate the mean end-to-end packet delay in a network that has any given two-connected topology, a single packet buffer at each output port, and an arbitrary traffic matrix.


18.
Hub networks are commonly used in telecommunications and logistics to connect origins to destinations in situations where a direct connection between each origin–destination (o-d) pair is impractical or too costly. Hubs serve as switching points to consolidate and route traffic in order to realize economies of scale. The main decisions associated with hub-network problems include (1) determining the number of hubs (p), (2) selecting the p nodes in the network that will serve as hubs, (3) allocating non-hub nodes (terminals) to up to r hubs, and (4) routing the pairwise o-d traffic. Typically, hub location problems include all four decisions, while hub allocation problems assume that the value of p is given. In the hub median problem the objective is to minimize total cost, while in the hub center problem the objective is to minimize the maximum cost between origin–destination pairs. We study the uncapacitated (i.e., links with unlimited capacity) r-allocation p-hub equitable center problem and explore alternative models and solution procedures.

19.
In the planning and construction of PTN (Packet Transport Network) networks, backup bandwidth must be reserved on fiber links so that, when some fibers are cut, the affected traffic has enough capacity to be rerouted; this is also an effective way to improve network survivability. This paper first enumerates the dual-link failure states of the network, then disconnects any two links of the network and incrementally increases link capacities to guarantee that the failed traffic can be rerouted. Finally, a dual-fiber-cut link capacity planning algorithm is proposed and evaluated by simulation. The results show that the algorithm performs well in saving network bandwidth, reducing construction cost, and tolerating faults, and can be readily applied to link planning in transport networks.
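A rough sketch of the enumeration described above, on a toy mesh with hop-count rerouting rather than a real PTN routing policy (all names and figures are hypothetical): for every pair of failed links, reroute the affected demands over the surviving network, record the extra capacity each surviving link would need, and keep the per-link maximum as the backup bandwidth to reserve.

```python
from collections import deque
from itertools import combinations

# Hypothetical fiber topology and per-demand working paths with bandwidths
EDGES = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
DEMANDS = [("a", "c", 4.0, [("a", "c")]),            # (src, dst, bw, working path)
           ("b", "d", 3.0, [("b", "c"), ("c", "d")])]

def route(avail, src, dst):
    """Shortest surviving path by hop count (BFS); None if disconnected."""
    adj = {}
    for u, v in avail:
        adj.setdefault(u, []).append((v, (u, v)))
        adj.setdefault(v, []).append((u, (u, v)))
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while prev[u]:
                u, e = prev[u]
                path.append(e)
            return path
        for v, e in adj.get(u, []):
            if v not in prev:
                prev[v] = (u, e)
                queue.append(v)
    return None

backup = {e: 0.0 for e in EDGES}
for f1, f2 in combinations(EDGES, 2):              # every dual-link failure state
    surviving = [e for e in EDGES if e not in (f1, f2)]
    extra = {e: 0.0 for e in surviving}
    for src, dst, bw, working in DEMANDS:
        if any(e in (f1, f2) for e in working):     # demand hit by the failure
            detour = route(surviving, src, dst)
            if detour:
                for e in detour:
                    extra[e] += bw
    for e, need in extra.items():                   # reserve for the worst case
        backup[e] = max(backup[e], need)

print("backup bandwidth to reserve per link:", backup)
```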

20.
In this paper we study the network planning problem of the bi-directional self-healing ring (BSHR), a network structure that provides higher survivability when there is a failure on a link or node. Given a network with nodes, links, and demand pairs, our target is to design an optimal network comprising rings, which use only the existing links, to satisfy all demands. The objective is to minimize the total amount of equipment (add/drop multiplexers) on nodes, thus reducing the major cost of the SHR structure. We propose two integer programming models. For larger networks, we have developed an efficient solution procedure based on their hierarchical network structure. Computational results are given to show that the solution procedure is effective in obtaining an optimal or near-optimal solution.

Scope and purpose: The merging of information networking and telecommunication services has created an increasing demand for telecommunication networks of high bandwidth, aiming to exchange ever larger volumes of data in a very short time interval. The self-healing ring (SHR) is a ring network that provides redundant bandwidth in which disrupted services can be automatically restored following network failures. In this paper, we study the network planning problem of the bi-directional SHR, aiming to minimize the amount of equipment.
