Similar Articles (20 results)
1.
《Computer Networks》2007,51(12):3354-3367
The IP lookup mechanism is a key issue in the design of IP routers. IP lookup is the operation by which a router finds the next hop of each incoming packet via a longest-prefix match in the routing table. This work places the routing table in a longest-prefix-first search tree, a heap-like structure ordered by prefix length. A router using this scheme performs fewer memory accesses per IP lookup than routers based on the Trie [E. Fredkin, Trie Memory, Communications of the ACM 3 (1960) 490–500], Patricia [K. Sklower, A tree-based routing table for Berkeley Unix, in: Proceedings of the USENIX Conference, 1991, pp. 93–99] or Prefix tree [M. Berger, IP lookup with low memory requirement and fast update, in: Proceedings of IEEE High Performance Switching and Routing, 2003, pp. 287–291]. Some nodes of the proposed tree can hold two routing-table entries, reducing the number of tree nodes; for instance, a routing table with 163,695 entries fits in a tree with 156,191 nodes. Furthermore, an improved scheme is presented that partitions a tree into several smaller trees. Simulation reveals that the scheme not only lowers the tree height effectively but also scales well to IPv6 addresses.
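As a point of reference for the trie-based schemes the paper compares against, here is a minimal Python sketch of longest-prefix matching over a plain binary trie. It illustrates the baseline operation only, not the paper's heap-like longest-prefix-first tree; all names are illustrative assumptions.

```python
# Minimal longest-prefix match over a binary trie -- an illustrative
# baseline, not the paper's heap-like longest-prefix-first tree.

class TrieNode:
    def __init__(self):
        self.children = {}      # '0' / '1' -> TrieNode
        self.next_hop = None    # set if a prefix ends at this node

def insert(root, prefix_bits, next_hop):
    node = root
    for bit in prefix_bits:
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Walk the trie, remembering the last (longest) matching prefix."""
    node, best = root, root.next_hop
    for bit in addr_bits:
        node = node.children.get(bit)
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "10", "hop-A")      # prefix 10/2
insert(root, "1011", "hop-B")    # prefix 1011/4
print(lookup(root, "10110000"))  # -> hop-B (longest match wins)
```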

2.
Due to the tremendous increase in Internet traffic, backbone routers must be able to forward massive numbers of incoming packets at several gigabits per second. IP address lookup is one of the most challenging tasks in high-speed packet forwarding. Some high-end routers implement it with hardware parallelism using ternary content-addressable memory (TCAM), but TCAM is expensive in terms of both circuit complexity and power consumption. Efficient algorithmic solutions that run on network processors are therefore required as low-cost alternatives.

Among the state-of-the-art algorithms for IP address lookup, binary search over a balanced tree is effective in providing a low-cost solution. To construct a balanced search tree, prefixes with nesting relationships must be converted into completely disjoint prefixes. A leaf-pushing technique is very useful for eliminating the nesting relationships among prefixes [V. Srinivasan, G. Varghese, Fast address lookups using controlled prefix expansion, ACM Transactions on Computer Systems 17 (1) (1999) 1-40]; however, it creates duplicate prefixes and thus expands the search tree.

This paper proposes an efficient IP address lookup algorithm based on a small balanced tree with entry reduction. Leaf pushing is used to create the completely disjoint entries. Among the leaf-pushed prefixes there are numerous pairs of adjacent prefixes with similar prefix strings and identical output ports; the number of entries can be significantly reduced by a new entry-reduction method that merges such pairs. After sorting the reduced disjoint entries, a small balanced tree with a very small node size is constructed, over which a native binary search can be used for address lookup. In addition, we propose a new multi-way search algorithm that improves on binary search for IPv4 address lookup. The proposed algorithms offer excellent lookup performance with reduced memory requirements, and they scale well both to large amounts of routing data and to the address migration toward IPv6. Performance evaluations on various IPv4 and IPv6 routing data show that, compared with other algorithms based on binary search, the proposed algorithms perform better in terms of lookup speed, memory requirement and scalability for the growth of entries and for IPv6.
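To illustrate why disjoint prefixes enable a plain binary search, here is a hedged Python sketch: once prefixes are disjoint (e.g., after leaf pushing), each one covers a contiguous address range, so lookup reduces to a binary search over sorted range endpoints. Toy 8-bit addresses are used, and the paper's entry-reduction and multi-way search steps are not shown.

```python
# Sketch: disjoint prefixes -> contiguous address ranges -> binary search.
import bisect

def prefix_to_range(bits, width=8):
    low = int(bits.ljust(width, "0"), 2)
    high = int(bits.ljust(width, "1"), 2)
    return low, high

# Disjoint prefixes mapped to sorted (low, high, port) entries.
entries = sorted(prefix_to_range(p) + (port,)
                 for p, port in [("00", 1), ("010", 2), ("1", 3)])
lows = [e[0] for e in entries]

def lookup(addr):
    i = bisect.bisect_right(lows, addr) - 1
    if i >= 0 and entries[i][0] <= addr <= entries[i][1]:
        return entries[i][2]
    return None  # no matching prefix

print(lookup(0b01000001))  # -> 2 (falls inside prefix 010/3)
```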

3.
Region-Based Memory Management
This paper describes a memory management discipline for programs that perform dynamic memory allocation and de-allocation. At runtime, all values are put into regions. The store consists of a stack of regions. All points of region allocation and de-allocation are inferred automatically, using a type and effect based program analysis. The scheme does not assume the presence of a garbage collector. The scheme was first presented in 1994 (M. Tofte and J.-P. Talpin, in “Proceedings of the 21st ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages,” pp. 188–201); subsequently, it has been tested in the ML Kit with Regions, a region-based, garbage-collection-free implementation of the Standard ML Core language, which includes recursive datatypes, higher-order functions and updatable references (L. Birkedal, M. Tofte, and M. Vejlstrup, in “Proceedings of the 23rd ACM SIGPLAN–SIGACT Symposium on Principles of Programming Languages,” 1996, pp. 171–183). This paper defines a region-based dynamic semantics for a skeletal programming language extracted from Standard ML. We present the inference system which specifies where regions can be allocated and de-allocated, together with a detailed proof that the system is sound with respect to a standard semantics. We conclude by giving some advice on how to write programs that run well on a stack of regions, based on practical experience with the ML Kit.

4.
We consider the Block PRAM model of Aggarwal et al. (in "Proceedings, First Annual ACM Symposium on Parallel Algorithms and Architectures, 1989," pp. 11-21). For a Block PRAM model with n/log n processors and communication latency l = O(log n), we show that prefix sums can be performed in time O(l log n/log l), but list ranking requires time Ω(l log n); these bounds are tight. These results justify an intuitive observation of Gazit et al. (in "Proceedings, 1987 Princeton Workshop on Algorithm, Architecture and Technology Issues for Models of Concurrent Computation," pp. 139-156) that algorithm designers should, when possible, replace the list ranking procedure with the prefix sums procedure. We demonstrate the value of this technique in choosing between two optimal PRAM algorithms for finding the connected components of dense graphs. We also give theoretical improvements for integer sorting and many other algorithms based on prefix sums, and suggest a relationship between the issue of graph density for the connected components problem and alternative approaches to integer sorting.
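For readers unfamiliar with the prefix-sums primitive being contrasted with list ranking, a minimal Python sketch of the logarithmic-round scan pattern follows. The rounds are simulated sequentially here; on a Block PRAM each round would be one parallel step, giving O(log n) rounds overall.

```python
# Sketch of the logarithmic-round prefix-sums pattern (Hillis-Steele scan),
# with parallel rounds simulated sequentially.

def prefix_sums(xs):
    xs = list(xs)
    n, d = len(xs), 1
    while d < n:
        # In each round, every element adds the value d positions back.
        xs = [xs[i] + (xs[i - d] if i >= d else 0) for i in range(n)]
        d *= 2
    return xs

print(prefix_sums([3, 1, 4, 1, 5, 9, 2, 6]))
# -> [3, 4, 8, 9, 14, 23, 25, 31]
```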

5.
《Computer Communications》2001,24(5-6):512-524
A key issue in the design of source-based multicast congestion control schemes is how to aggregate loss indications from multiple receivers into a single rate control decision at the source. Such aggregation entails filtering out a portion of the loss indications received by the source and then using the remainder for rate adjustments. In this paper, we first propose a set of goals guiding the design of loss indication filters. We then present a novel loss indication filtering approach, the linear proportional response (LPR) approach. Analysis and simulation are used to compare LPR to two well-known approaches, the random listening algorithm (RLA) [Proceedings of ACM SIGCOMM (1998)] and worst estimate-based tracking (WET) [Proceedings of IEEE Infocom (1999)]. Our results indicate that LPR achieves a desirable tradeoff between stability and responsiveness, making it more suitable than WET and RLA for deployment in an Internet-like environment.

6.
We present two new algorithms, Arc Length and Peer Count, for choosing a peer uniformly at random from the set of all peers in Chord (Proceedings of the ACM SIGCOMM 2001 Technical Conference, 2001). We show analytically that, in expectation, both algorithms have latency O(log n) and send O(log n) messages. Moreover, we show empirically that the average latency and message cost of Arc Length is 10.01 log n and that of Peer Count is 20.02 log n. To the best of our knowledge, these are the first fully distributed algorithms for choosing a peer uniformly at random from the set of all peers in a Distributed Hash Table (DHT). Our motivation for studying this problem is threefold: to enable data collection by statistically rigorous sampling methods; to provide support for randomized, distributed algorithms over peer-to-peer networks; and to support the creation and maintenance of random links, thereby offering a simple means of improving fault-tolerance. Research of S. Lewis, J. Saia and M. Young was partially supported by NSF grant CCR-0313160 and Sandia University Research Program grant No. 191445.

7.
F. Yu, R.H. Katz, T.V. Lakshman, 《IEEE Micro》2005,25(1):50-59
Today's packet classification systems are designed to report the highest-priority matching result, such as the longest prefix match, even if a packet matches multiple classification rules. However, new network applications demanding multimatch classification, that is, requiring all matching results instead of only the highest-priority match, are emerging. Ternary content-addressable memory (TCAM) is becoming a common extension to network processors, and its capability and speed make it attractive for high-speed networks. The proposed TCAM-based scheme produces multimatch classification results with about 10 times fewer memory lookups than a pure software approach. In addition, the scheme for removing negation in rule sets saves up to 95 percent of the TCAM space used by a straightforward implementation.

8.
In secure group communications, a key server can deliver a “group-oriented” rekey message [C.K. Wong, M.G. Gouda, S.S. Lam, Secure group communications using key graphs, in: Proceedings of ACM SIGCOMM ’98, September 1998, pp. 68–79] to a large number of users efficiently using multicast. For reliable delivery, Keystone [C.K. Wong, S.S. Lam, Keystone: a group key management system, in: Proceedings of International Conference on Telecommunications, Acapulco, Mexico, May 2000] proposed the use of forward error correction (FEC) in an initial multicast, followed by unicast delivery for users that cannot recover their new keys from the multicast. In this paper, we investigate how to limit unicast recovery to a small fraction r of the user population. By specifying a very small r, almost all users in the group will receive their new keys within a single multicast round.

We present analytic models for deriving r as a function of the amount of FEC redundant information (denoted by h) and the rekeying interval duration (denoted by T), for both Bernoulli and two-state Markov chain loss models. From our analyses, we conclude that r decreases roughly at an exponential rate as h increases. We then present a protocol designed to adaptively adjust (h,T) to achieve a specified r. In particular, our protocol chooses from among all feasible (h,T) pairs one with h and T values close to their feasible minima. Our protocol also adapts to increases in network traffic. Simulation results using ns-2 show that even under network congestion our adaptive FEC protocol can still achieve a specified r by adjusting the values of h and T.
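A hedged sketch of the kind of Bernoulli-loss calculation such analytic models involve: with k key packets plus h FEC parity packets, a user recovers iff at least k of the k+h packets arrive. This illustrative formulation (an assumption, not the paper's exact model) already exhibits the roughly exponential decrease of r in h.

```python
# Sketch of a Bernoulli-loss recovery model: r is the expected fraction
# of users needing unicast recovery. Illustrative formulation only.
from math import comb

def recovery_fraction(k, h, p_loss):
    n = k + h
    # P(fewer than k of the n packets arrive) = P(key recovery fails)
    return sum(comb(n, i) * (1 - p_loss) ** i * p_loss ** (n - i)
               for i in range(k))

for h in (0, 2, 4, 8):
    print(h, round(recovery_fraction(k=16, h=h, p_loss=0.05), 6))
# r drops roughly exponentially as h grows, matching the paper's conclusion.
```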

9.
We study permutation betting markets, introduced by Chen et al. (Proceedings of the ACM Conference on Electronic Commerce, 2007). For these markets, we consider subset betting, in which each trader can bet on a subset of candidates ending up in a subset of positions. We consider the revenue maximization problem for the auctioneer in two main frameworks: risk-free revenue maximization (studied in Chen et al., op. cit.) and probabilistic revenue maximization. We also explore the use of certain knowledge, or extra information, about the possible outcomes of the market. We first show that finding the optimal revenue in the risk-free model for the subset betting problem is inapproximable; this resolves an open question posed by Chen et al. (op. cit.). To identify solvable variants of the problem, we propose the singleton betting language, which allows traders to bet an arbitrary value on one candidate for one position. For singleton betting, we first provide a linear-time checkable necessary and sufficient condition for the existence of a solution with positive revenue for any possible outcome. Furthermore, we develop an LP-based polynomial-time algorithm to find the optimal solution of this problem, and we show how to extend this LP-based method to handle extra information about the possible outcomes. Finally, we consider the revenue maximization problem in a probabilistic setting. For this variant, we observe that maximizing the expected revenue is polynomial-time solvable, but we show that maximizing the probability of achieving a pre-specified revenue is #P-complete.

10.
Congestion control is an important building block of a Quality of Service (QoS) system for multicast-based multimedia services and applications on the World Wide Web. We propose an end-to-end, single-rate, source-based multicast congestion control scheme (LE-SBCC) for reliable or unreliable multicast transport protocols. It addresses all the pieces of the single-rate multicast congestion control problem, including drop-to-zero issues, TCP friendliness and RTT estimation. The design consists of a cascaded set of filters and a rate-based additive-increase multiplicative-decrease (AIMD) module. Together, these filters make the multicast tree appear like a unicast path for the purposes of congestion control. Unlike TCP, the scheme is not self-clocked but acts upon a stream of loss indications (LIs) from receivers. These LIs are filtered to obtain a stream of loss events (LEs) (S. Floyd et al., in SIGCOMM 2000, Aug. 2000), at most one per RTT per receiver. This LE stream is further filtered to extract the maximum LEs from any one receiver. The scheme then applies at most one rate reduction per round-trip time (RTT). A range of simulation and experimental results is presented and compared against the mathematical model of the scheme's components. Furthermore, we have successfully adapted TFRC (op. cit.) to our scheme, which is important for multimedia services desiring relatively stable rates over short time scales.
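A minimal sketch of the rate-based AIMD core that such schemes drive with filtered loss events. The constants and class name are illustrative assumptions, not LE-SBCC's tuned values.

```python
# Sketch of rate-based AIMD: additive increase once per RTT,
# multiplicative decrease on (at most one) loss event per RTT.

class AimdRate:
    def __init__(self, rate=1.0, alpha=0.5, beta=0.5):
        self.rate = rate      # sending rate, e.g. Mbit/s
        self.alpha = alpha    # additive increase per RTT
        self.beta = beta      # multiplicative decrease factor

    def on_rtt(self, loss_event):
        if loss_event:
            self.rate *= self.beta    # at most one cut per RTT
        else:
            self.rate += self.alpha
        return self.rate

ctl = AimdRate()
for loss in [False, False, True, False]:
    print(round(ctl.on_rtt(loss), 2))   # 1.5, 2.0, 1.0, 1.5
```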

11.
The routing of traffic between Internet domains, or Autonomous Systems (ASes), a task known as interdomain routing, is currently handled by the Border Gateway Protocol (BGP; Rekhter and Li, RFC 4271 of the Internet Engineering Task Force, 2006). Using BGP, ASes can apply semantically rich routing policies to choose interdomain routes in a distributed fashion. This expressiveness in routing-policy choice supports domains’ autonomy in network operations and in business decisions, but it comes at a price: the interaction of locally defined routing policies can lead to unexpected global anomalies, including route oscillations or overall protocol divergence (see, e.g., Varadhan et al., Comput Networks 32(1):1–16, 2000). Networking researchers have addressed this problem by devising constraints on policies that guarantee BGP convergence without unduly limiting expressiveness and autonomy (see, e.g., Gao and Rexford, IEEE/ACM Trans Network 9(6):681–692, 2001; Griffin et al., in: Proceedings of 9th ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM’03), pp. 61–72, ACM Press, New York, 2003). In addition to taking this engineering or “protocol-design” approach, researchers have approached interdomain routing from an economic or “mechanism-design” point of view. It is known that lowest-cost-path (LCP) routing can be implemented in an incentive-compatible, BGP-compatible manner (Feigenbaum et al., Distribut Comput 18(1):61–72, 2005; Shneidman and Parkes, in: Proceedings of 23rd ACM Symposium on Principles of Distributed Computing (PODC’04), pp. 88–97, ACM Press, New York, 2004) but that several other natural classes of policies cannot (Feigenbaum et al., Theor Comput Sci 378(2):175–189, 2007; Feigenbaum et al., Distribut Comput 18(4):293–305, 2006). In this paper, we present the first example of a class of interdomain-routing policies that is more general than LCP routing and for which BGP itself is both incentive-compatible and guaranteed to converge. We also present several steps toward a general theory of incentive-compatible, BGP-compatible interdomain routing.

12.
We investigate a generalization of the notion of XML security view introduced by Stoica and Farkas (Proceedings of the 16th International Conference on Data and Applications Security (IFIP’02), IFIP Conference Proceedings, vol. 256, pp. 133–146, Kluwer, Dordrecht, 2002) and later refined by Fan et al. (Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD’04), pp. 587–598, ACM Press, New York, 2004). The model consists of access control policies specified over DTDs with XPath expressions for data-dependent access control. We provide the notion of a security view characterizing the information accessible to authorized users. This is a transformed DTD schema that users can employ for query formulation. We develop an algorithm to materialize an authorized version of the document from the view and an algorithm to construct the view from an access control specification. We show that our view construction combined with materialization produces the same result as the direct application of the DTD access specification to the document. We also propose a number of generalizations of possible security policies and show how they affect the view construction algorithm. Finally, we provide an evaluation of our system.

13.
Formal methods have proved successful in analyzing different kinds of security protocols. They typically formalize and study the security guarantees provided by cryptographic protocols when executed by a (possibly unbounded) number of different participants. A key problem in applying formal methods to cryptographic protocols is the study of multi-protocol systems, where different protocols are executed concurrently. This scenario is particularly interesting in a global computing setting, where several different security services coexist and may be combined together. In this paper, we discuss how the tagging mechanism presented in [M. Bugliesi, R. Focardi, and M. Maffei, Compositional analysis of authentication protocols, in: Proceedings of European Symposium on Programming (ESOP 2004), volume 2986 of Lecture Notes in Computer Science, pages 140–154, Springer-Verlag, 2004; M. Bugliesi, R. Focardi, and M. Maffei, A theory of types and effects for authentication, in: ACM Proceedings of Formal Methods for Security Engineering: from Theory to Practice (FMSE 2004), pages 1–12, ACM Press, October 2004] addresses this issue.

14.
Quantum control plays a key role in quantum technology, in particular for steering quantum systems. As the problem size grows exponentially with the system size, fast numerical algorithms and implementations are essential. We improved an existing code for quantum control with respect to two linear algebra tasks: the computation of the matrix exponential, and efficient parallelisation of prefix matrix multiplication.

For the matrix exponential we compare three methods: the eigendecomposition method, the Padé method and a polynomial expansion based on Chebyshev polynomials. We show that the Chebyshev method outperforms the other methods in terms of both computation time and accuracy. For the prefix problem we compare the tree-based parallel prefix scheme, which is based on a recursive approach, with a sequential multiplication scheme in which only the individual matrix multiplications are parallelised. We show that this fine-grain approach outperforms the parallel prefix scheme by a factor of 2–3, depending on the parallel hardware and problem size, and also has lower memory requirements.

Overall, the improved linear algebra implementations not only led to a considerable runtime reduction, but also allowed us to tackle problems of larger size on the same parallel compute cluster.
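A hedged sketch of a Chebyshev-polynomial expansion of the matrix exponential, for a symmetric matrix whose spectrum has been scaled into [-1, 1]: exp(A) is approximated by I_0(1) T_0(A) + 2 * sum over k >= 1 of I_k(1) T_k(A), where the coefficients I_k are modified Bessel function values and T_k follows the Chebyshev recurrence. The paper's actual scaling and truncation choices will differ; the function name here is an illustrative assumption.

```python
# Sketch: Chebyshev expansion of exp(A) for symmetric A with
# eigenvalues in [-1, 1], via T_{k+1} = 2 A T_k - T_{k-1}.
import numpy as np
from scipy.special import iv  # modified Bessel functions I_k

def expm_chebyshev(A, terms=20):
    n = A.shape[0]
    Tkm1, Tk = np.eye(n), A.copy()          # T_0, T_1
    result = iv(0, 1.0) * Tkm1 + 2 * iv(1, 1.0) * Tk
    for k in range(2, terms):
        Tkm1, Tk = Tk, 2 * A @ Tk - Tkm1    # Chebyshev recurrence
        result += 2 * iv(k, 1.0) * Tk
    return result

A = np.array([[0.0, 0.5], [0.5, 0.0]])      # spectrum within [-1, 1]
exact = np.array([[np.cosh(0.5), np.sinh(0.5)],
                  [np.sinh(0.5), np.cosh(0.5)]])
print(np.allclose(expm_chebyshev(A), exact))  # True
```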

15.
In this paper, we implement some notable hierarchical or decision-tree-based packet classification algorithms such as extended grid of tries (EGT), hierarchical intelligent cuttings (HiCuts), HyperCuts, and hierarchical binary search (HBS) on an IXP2400 network processor. By using all six of the available processing microengines (MEs), we find that none of these existing packet classification algorithms achieve the line speed of OC-48 provided by IXP2400. To improve the search speed of these packet classification algorithms, we propose the use of software cache designs to take advantage of the temporal locality of the packets because IXP network processors have no built-in caches for fast path processing in MEs. Furthermore, we propose hint-based cache designs to reduce the search duration of the packet classification data structure when cache misses occur. Both the header and prefix caches are studied. Although the proposed cache schemes are designed for all the dimension-by-dimension packet classification schemes, they are, nonetheless, the most suitable for HBS. Our performance simulations show that the HBS enhanced with the proposed cache schemes performs the best in terms of classification speed and number of memory accesses when the memory requirement is in the same range as those of HiCuts and HyperCuts. Based on the experiments with all the high and low locality packet traces, five MEs are sufficient for the proposed rule cache with hints to achieve the line speed of OC-48 provided by IXP2400.

16.
Recently, Nenadić et al. proposed a novel fair exchange protocol, RSA-CEMD [A. Nenadić, N. Zhang, S. Barton, Fair certified e-mail delivery, in: Proceedings of the 9th ACM Symposium on Applied Computing (SAC 2004), Computer Security Track, Nicosia, Cyprus, pp. 391–396, 2004], for certified e-mail delivery with an off-line and transparent trusted third party. The protocol provides non-repudiation-of-origin and non-repudiation-of-receipt security services to protect the communicating parties from each other's false denials that the e-mail has been sent and received. In this paper, we show that Nenadić's protocol cannot achieve the claimed fairness. In the exchange protocol, the receiver can cheat the sender by sending an invalid verifiable and recoverable encryption of signature (VRES) that passes all the sender's verifications, because the VRES scheme proposed in the above paper is inherently unrecoverable in some situations. In other words, it is always possible for the receiver to obtain the sender's e-mail message while the sender cannot obtain the receiver's receipt. Furthermore, we propose a revised version of the certified e-mail delivery protocol that preserves strong fairness while remaining optimistic.

17.
Excessive buffer requirements for handling continuous-media playbacks are an impediment to cost-effective provisioning of on-line video retrieval. Given the skewed distribution of video popularity, it is expected that there will often be concurrent playbacks of the same video file within a short time interval. This creates an opportunity to batch multiple requests and to service them with a single stream from the disk without violating the on-demand constraint. However, doing so requires keeping data in memory between successive uses, which leads to a buffer space trade-off between servicing a request in memory mode and servicing it in disk mode. In this work, we develop a novel algorithm to minimize the buffer requirement for supporting a set of concurrent playbacks. A strength of the proposed scheme is that it enables the server to adapt dynamically to the changing workload while minimizing the total buffer space requirement. Our algorithm significantly decreases the total buffer requirement, especially when the user access pattern is biased in favor of a small set of files. The proposed scheme is modeled in detail using an analytical formulation, and the optimality of the algorithm is proved. An analytical framework is developed so that the proposed scheme can be used in combination with various existing disk-scheduling strategies. Our simulation results confirm that, under certain circumstances, it is much more resource-efficient to support some of the playbacks in memory mode, and the proposed scheme consequently enables the server to minimize the overall buffer space requirement.

18.
19.
Unifying statistical texture classification frameworks
The objective of this paper is to examine statistical approaches to the classification of textured materials from a single image obtained under unknown viewpoint and illumination. The approaches investigated here are based on the joint probability distribution of filter responses.

We review previous work based on this formulation and make two observations. First, we show that there is a correspondence between the two common representations of filter outputs—textons and binned histograms. Second, we show that two classification methodologies, nearest neighbour matching and Bayesian classification, are equivalent for particular choices of the distance measure. We describe the pros and cons of these alternative representations and distance measures, and illustrate the discussion by classifying all the materials in the Columbia-Utrecht (CUReT) texture database.

These equivalences allow us to perform direct comparisons between the texton frequency matching framework, best exemplified by the classifiers of Leung and Malik [Int. J. Comput. Vis. 43 (2001) 29], Cula and Dana [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2001) 1041], and Varma and Zisserman [Proceedings of the Seventh European Conference on Computer Vision 3 (2002) 255], and the Bayesian framework most closely represented by the work of Konishi and Yuille [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2000) 125].
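A small Python sketch of the shared histogram-comparison step, nearest-neighbour classification of binned filter-response histograms under the chi-squared distance, on synthetic data (the labels, dimensions and helper names are illustrative assumptions):

```python
# Sketch: nearest-neighbour matching of normalised histograms under
# the chi-squared distance, on synthetic data.
import numpy as np

def chi2(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify(query, models):
    """models: list of (label, normalised histogram) pairs."""
    return min(models, key=lambda m: chi2(query, m[1]))[0]

rng = np.random.default_rng(0)
def hist():
    h = rng.random(32)
    return h / h.sum()

models = [("plaster", hist()), ("felt", hist()), ("leather", hist())]
noisy = models[1][1] + 0.01 * rng.random(32)   # perturbed "felt" sample
print(classify(noisy / noisy.sum(), models))   # -> felt
```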


20.
Membership determination for text strings is an important procedure when analyzing tremendous amounts of textual data, especially when time is a crucial factor. The Bloom filter is a well-known approach to this problem because of its succinct structure and simple determination procedure. As membership determination with classification is becoming increasingly desirable, parallel Bloom filters are often implemented to handle the additional classification requirement. Parallel Bloom filters, however, tend to produce additional false-positive errors, since membership determination must be performed on each of the parallel layers. We propose a scheme based on CMAC, a neural network mapping, which requires only a single-layer calculation to simultaneously obtain both membership and classification information. A hash function specifically designed for text strings is also proposed. The proposed scheme effectively reduces false-positive errors by converging the range of membership acceptance to the minimum for each class during the neural network mapping. Simulation results show that the proposed scheme commits significantly fewer errors than the benchmark, parallel Bloom filters, with limited and identical memory usage at different classification levels.
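A minimal Bloom filter sketch for string membership, the single-class building block that parallel Bloom filters replicate per class. The parameters m and k and the hashing construction are illustrative assumptions, not the paper's specialised hash function.

```python
# Minimal Bloom filter for string membership: k hash positions over an
# m-bit array; false positives are possible, false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, s):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{s}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, s):
        for pos in self._positions(s):
            self.bits |= 1 << pos

    def might_contain(self, s):
        return all(self.bits >> pos & 1 for pos in self._positions(s))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"))  # True
print(bf.might_contain("world"))  # almost certainly False
```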
