Similar Documents
20 similar documents found.
1.
The virtual network embedding/mapping problem is an important issue in network virtualization for Software-Defined Networking (SDN). It is mainly concerned with mapping virtual network requests, which could be sets of SDN flows, onto a shared substrate network automatically and efficiently. Previous research has mainly focused on developing heuristic algorithms for virtual networks with general topologies. In practice, however, virtual networks are usually generated with specific topologies for specific purposes, so it is a challenge to optimize the heuristic algorithms using this topology information. To deal with this problem, we propose a topology-cognitive algorithm framework, composed of a guiding principle for developing topology-specific algorithms and a compound algorithm. The compound algorithm consists of several sub-algorithms, each optimized for a specific topology. We develop star, tree, and ring topology algorithms as examples; other sub-algorithms can easily be added following the same framework. The simulation results show that the topology-cognitive framework is effective for developing new topology algorithms, and that the compound algorithm greatly improves the Revenue/Cost (R/C) ratio and runtime compared with traditional heuristic algorithms for the multi-topology virtual network embedding problem.
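The dispatching step of such a compound algorithm can be pictured with a small sketch. The detection rules and handler names below are illustrative assumptions, not the paper's framework itself: the request's topology is recognised with simple degree and edge-count checks, and a topology-specific embedder is chosen, falling back to a general heuristic.

```python
import networkx as nx

def detect_topology(g: nx.Graph) -> str:
    """Very rough topology recognition for a virtual network request."""
    n, m = g.number_of_nodes(), g.number_of_edges()
    degs = sorted(d for _, d in g.degree())
    if m == n - 1 and degs and degs[-1] == n - 1:
        return "star"        # one hub connected to every other node
    if m == n and all(d == 2 for d in degs) and nx.is_connected(g):
        return "ring"        # a single cycle through all nodes
    if m == n - 1 and nx.is_connected(g):
        return "tree"
    return "general"

def embed(request, substrate, handlers):
    # handlers: dict mapping topology name -> embedding sub-algorithm;
    # "general" is the fallback heuristic for unrecognised topologies.
    algo = handlers.get(detect_topology(request), handlers["general"])
    return algo(request, substrate)
```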

2.
Wearable smart devices, such as smart watches and wristbands, have become increasingly popular. They generally integrate MEMS inertial sensors, including an accelerometer, a gyroscope, and a compass, which provide a convenient and inexpensive way to collect motion data from users. Such rich, continuous motion data offer great potential for remote healthcare and disease diagnosis. Information processing algorithms play the critical role in these approaches: they extract motion signatures and support different kinds of judgements. This paper reviews the key algorithms in this area. In particular, we focus on three kinds of applications: 1) gait analysis; 2) fall detection; and 3) sleep monitoring. These are the most popular healthcare applications based on inertial data. By categorizing and introducing the key algorithms, this paper tries to build a clear map of how inertial data are processed, and how inertial signatures are defined, extracted, and utilized in different kinds of applications. This provides valuable guidance for understanding the methodologies and selecting a proper algorithm for a specific application.

3.
In this paper, a new multiclass classification algorithm is proposed based on the idea of Locally Linear Embedding (LLE), to avoid the defect of traditional manifold learning algorithms, which cannot handle new sample points. The algorithm defines an error criterion by computing a sample's reconstruction weights using LLE. Furthermore, the existence and characteristics of a low-dimensional manifold in range-profile time-frequency information are explored using manifold learning, addressing the problem of target recognition for high range resolution MilliMeter-Wave (MMW) radar. The new algorithm is applied to radar target recognition. The experimental results show that the algorithm is effective: compared with other classification algorithms, our method improves recognition precision, and the result is not sensitive to the input parameters.
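To make the reconstruction-error criterion concrete, the sketch below (not the authors' code; the neighbour count k and the regularisation term are illustrative assumptions) computes the LLE reconstruction weights of a new sample from its k nearest neighbours within each class and assigns the class with the smallest reconstruction error.

```python
import numpy as np

def lle_reconstruction_error(x, class_samples, k=5, reg=1e-3):
    """Reconstruct x from its k nearest neighbours in one class; return the squared error."""
    dist = np.linalg.norm(class_samples - x, axis=1)
    nbrs = class_samples[np.argsort(dist)[:k]]        # k nearest neighbours (k <= class size assumed)
    Z = nbrs - x                                      # neighbours shifted to the query point
    C = Z @ Z.T                                       # local Gram matrix
    C += reg * np.trace(C) * np.eye(len(nbrs))        # regularise for numerical stability
    w = np.linalg.solve(C, np.ones(len(nbrs)))
    w /= w.sum()                                      # LLE weights sum to one
    return float(np.sum((x - w @ nbrs) ** 2))

def classify(x, classes):
    """classes: dict mapping label -> (n_samples, n_features) array of training profiles."""
    return min(classes, key=lambda c: lle_reconstruction_error(x, classes[c]))
```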

4.
Many existing image annotation algorithms work under a probabilistic modeling mechanism. In this paper, we formulate the problem as a variation of the supervised learning task and propose an Improved Citation kNN (ICKNN) Multiple-instance learning (MIL) algorithm for automatic image annotation. In contrast with existing MIL-based image annotation algorithms, which intend to learn an explicit correspondence between image regions and keywords, here we annotate keywords on the entire image instead of its regions. Concretely, we first explore the concept of a Confidence weight (CW) for every training bag (image), which reflects the degree of relevance between a bag and a semantic keyword; it can be treated as a re-ranking stage on the training set before annotation starts. Moreover, a modified Hausdorff distance is adopted in the ICKNN algorithm to solve the automatic annotation problem. The proposed approach demonstrates promising performance over 5,000 images from the COREL dataset, as compared with several current algorithms in the literature.
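For reference, the sketch below shows one common definition of the modified Hausdorff distance (the Dubuisson-Jain variant) between two bags of instance vectors; whether this exact variant is the one adopted in the paper is an assumption.

```python
import numpy as np

def modified_hausdorff(A, B):
    """A: (n, d) array, B: (m, d) array of instances (e.g. region feature vectors)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise Euclidean distances
    d_ab = D.min(axis=1).mean()   # mean distance from each instance in A to its closest in B
    d_ba = D.min(axis=0).mean()   # mean distance from each instance in B to its closest in A
    return max(d_ab, d_ba)
```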

5.
Most existing algorithms for the underdetermined blind source separation (UBSS) problem are two-stage algorithms: mixing-parameter estimation followed by source estimation. In the mixing-parameter estimation stage, the traditional clustering algorithms proposed previously are sensitive to the initialization of the mixing parameters. To reduce this sensitivity, we propose a new algorithm for the UBSS problem on anechoic speech mixtures that employs visual information to obtain the interaural time difference (ITD) and the interaural level difference (ILD) as initializations of the mixing parameters. In our algorithm, the video signals are used to estimate the distances between microphones and sources, from which estimates of the ITD and ILD are derived. Under the sparsity assumption in the time-frequency domain, the Gaussian potential function algorithm is used to estimate the mixing parameters with the ITDs and ILDs as initializations. Time-frequency masking is then used to recover the sources by evaluating the various ITDs and ILDs. Experimental results demonstrate the competitive performance of the proposed algorithm compared with the baseline algorithms.
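The step from visually estimated distances to ITD/ILD initial values can be sketched as below; the free-field model used here (delay equal to the path-length difference over the speed of sound, level falling off as 1/distance) is an illustrative assumption rather than the paper's exact formulation.

```python
import math

def itd_ild_from_distances(d_left, d_right, c=343.0):
    """d_left, d_right: estimated source-to-microphone distances in metres."""
    itd = (d_right - d_left) / c                 # arrival-time difference in seconds
    ild = 20.0 * math.log10(d_right / d_left)    # level difference in dB under 1/r attenuation
    return itd, ild
```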

6.
In this paper, the IHSL transform and the Fuzzy C-Means (FCM) segmentation algorithm are combined to perform unsupervised classification of fully polarimetric Synthetic Aperture Radar (SAR) data. We apply the IHSL colour transform to the H/α/SPAN space to obtain a new space (an RGB colour space) whose parameters have uniform distinguishability and which retains the whole polarimetric information of H/α/SPAN. The FCM algorithm is then applied to this RGB space to complete the classification. The main advantages of this method are that the parameters in the colour space have similar interclass distinguishability, so a high performance can be achieved in the pixel-based segmentation, and that the parameters can be treated in the same way, which simplifies the segmentation procedure. Experiments show that it provides an improved classification result compared with the method that uses the H/α/SPAN space directly during segmentation.
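A compact sketch of the standard FCM iteration applied to pixel feature vectors (for example, the RGB values produced by the IHSL transform) is given below; the fuzzifier m, cluster count, and iteration limit are typical defaults, not necessarily the paper's settings.

```python
import numpy as np

def fcm(X, n_clusters=4, m=2.0, n_iter=100, eps=1e-9, seed=0):
    """X: (n_pixels, n_features) array; returns hard labels and cluster centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                     # fuzzy memberships
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # membership-weighted centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
        U = d ** (-2.0 / (m - 1.0))                       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U.argmax(axis=1), centers
```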

7.
A multi-parameter signal sorting algorithm for interleaved radar pulses in dense emitter environments is presented. The algorithm includes two parts: pulse classification and pulse repetition interval (PRI) analysis. First, we propose dynamic distance clustering (DDC) for classification. In this clustering algorithm, multi-dimensional features of radar pulses are used for reliable classification. A similarity threshold estimation method for DDC is derived, which contributes to the efficiency of the algorithm. However, DDC is computationally expensive when there are many signal pulses, so, in order to sort radar signals in real time, an improved DDC (IDDC) algorithm is proposed. Finally, PRI analysis is adopted to complete the sorting process. Simulation experiments and hardware implementations show that both algorithms are effective.
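The PRI analysis stage can be illustrated with the textbook difference-histogram idea: within one pulse class, histogram the differences of the times of arrival and read PRI candidates from the strongest bins. This sketch is illustrative and not the paper's exact IDDC/PRI procedure; the time units and bin width are assumed parameters.

```python
import numpy as np

def pri_candidates(toa, max_pri=1e-2, bin_width=1e-6, top=3):
    """toa: pulse times of arrival (seconds) for one emitter class."""
    toa = np.sort(np.asarray(toa, dtype=float))
    diffs = [t2 - t1 for i, t1 in enumerate(toa)
             for t2 in toa[i + 1:] if t2 - t1 <= max_pri]   # all TOA differences up to max_pri
    hist, edges = np.histogram(diffs, bins=np.arange(0.0, max_pri, bin_width))
    best = np.argsort(hist)[::-1][:top]                     # strongest histogram bins
    return [(edges[i] + edges[i + 1]) / 2 for i in best]    # bin centres as PRI candidates
```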

8.
The quality, quantity, and consistency of the knowledge used in GO-playing programs often determine their strength, and automatic acquisition of large amounts of high-quality, consistent GO knowledge is crucial for successful GO playing. In a previous article on this subject, we presented an algorithm for the efficient, automatic acquisition of spatial patterns of GO, together with their frequency of occurrence, from game records. In this article, we present two algorithms: one for the efficient, automatic acquisition of pairs of spatial patterns that appear jointly in a local context, and the other for determining whether the joint pattern appearances are statistically significant rather than coincidental. The two algorithms yield 1 779 966 pairs of spatial patterns acquired automatically from 16 067 game records of professional GO players, of which about 99.8% qualify as pattern collocations with a statistical confidence of 99.5% or higher.
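One way to phrase such a significance test is sketched below: under independence, the expected number of joint appearances of two patterns in N local contexts is N*p_a*p_b, and a one-sided normal approximation to the binomial gives a z-score to compare against the 99.5% confidence threshold. The paper's exact statistic is not reproduced here, so this particular test is an illustrative assumption.

```python
import math

def is_collocation(count_a, count_b, count_ab, n_contexts, z_threshold=2.576):
    """z_threshold = 2.576 corresponds to a one-sided 99.5% confidence level."""
    p = (count_a / n_contexts) * (count_b / n_contexts)   # joint probability under independence
    mean = n_contexts * p
    std = math.sqrt(n_contexts * p * (1 - p))
    if std == 0:
        return count_ab > mean
    return (count_ab - mean) / std >= z_threshold
```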

9.
The computation of Chebyshev polynomials over a finite field is the dominant operation in a class of public-key cryptosystems. Two generic algorithms have been presented for this computation, the matrix algorithm and the characteristic polynomial algorithm, which are feasible but not optimized. In this paper, the procedures of these two algorithms are modified to obtain faster execution. The asymptotic complexity of the modified algorithms is unchanged, but the number of required operations is reduced, so the execution speed is improved. In addition, a new algorithm based on the eigenvalues of the matrix used in the representation of Chebyshev polynomials is presented, which can further reduce the running time when certain conditions are satisfied. Software implementations of these algorithms are provided, and their running times are compared. Finally, an efficient scheme for computing Chebyshev polynomials over a finite field is presented.
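As a minimal sketch of the matrix-algorithm idea (assuming a prime modulus p and n >= 0; this is the standard companion-matrix method, not necessarily the paper's optimized variant), T_n(x) mod p can be obtained from the recurrence T_n = 2x*T_{n-1} - T_{n-2} by raising the matrix [[2x, -1], [1, 0]] to the (n-1)-th power with square-and-multiply.

```python
def chebyshev_mod(n, x, p):
    """Compute T_n(x) mod p via fast exponentiation of the 2x2 companion matrix."""
    if n == 0:
        return 1 % p
    def mat_mul(A, B):
        return [[(A[0][0] * B[0][0] + A[0][1] * B[1][0]) % p,
                 (A[0][0] * B[0][1] + A[0][1] * B[1][1]) % p],
                [(A[1][0] * B[0][0] + A[1][1] * B[1][0]) % p,
                 (A[1][0] * B[0][1] + A[1][1] * B[1][1]) % p]]
    M = [[(2 * x) % p, (-1) % p], [1, 0]]
    R = [[1, 0], [0, 1]]                 # identity
    e = n - 1
    while e:                             # square-and-multiply: R = M^(n-1)
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    # [T_n, T_{n-1}] = M^(n-1) applied to [T_1, T_0] with T_1 = x, T_0 = 1
    return (R[0][0] * x + R[0][1]) % p
```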

10.
To address the limitations of the traditional unsharp masking algorithm for image enhancement, we derive a function that yields the coefficient controlling the degree of enhancement, based on unsharp masking and the Gaussian operator. We then propose a novel adaptive algorithm for real-time image enhancement and analyze how it can be realized in hardware. Finally, comparative experimental results are given for the proposed algorithm and other image enhancement algorithms. The results show that the algorithm enhances detail regions of the image to the greatest extent, avoids excessive overshoot in edge regions, and minimizes the degree of enhancement in uniform regions. A pleasing result is achieved with the proposed algorithm; furthermore, it can be realized simply in hardware, so images can be enhanced effectively while satisfying real-time demands.
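A minimal sketch of Gaussian-based unsharp masking with a locally adaptive gain is shown below. The specific gain rule (small gain where local activity is low, saturating gain near strong edges) is an illustrative assumption and not the coefficient function derived in the paper; scipy is used for the Gaussian filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_unsharp(img, sigma=1.5, k_max=2.0):
    """img: 2-D greyscale array in [0, 255]; returns the adaptively sharpened image."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma)
    detail = img - blurred                               # high-frequency component
    activity = gaussian_filter(np.abs(detail), sigma)    # smoothed local detail energy
    # gain grows with local activity but saturates, limiting overshoot at strong edges
    gain = k_max * activity / (activity + activity.mean() + 1e-9)
    return np.clip(img + gain * detail, 0.0, 255.0)
```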

11.
Byte stuffing is a process that encodes a sequence of data bytes that may contain “illegal” or “reserved” values, using a potentially longer sequence that contains no occurrences of these values. The extra length is referred to here as the overhead of the encoding. To date, byte stuffing algorithms, such as those used by SLIP, PPP, and AX.25, have been designed to incur low average overhead, but little effort has been made to minimize their worst-case overhead. However, there are some increasingly popular network devices whose performance is determined more by the worst case than by the average case. For example, the transmission time for ISM-band packet radio transmitters is strictly limited by FCC regulation. To adhere to this regulation, the current practice is to set the maximum packet size artificially low so that no packet, even after worst-case overhead, can exceed the transmission time limit. This paper presents a new byte stuffing algorithm, called consistent overhead byte stuffing (COBS), which tightly bounds the worst-case overhead. It guarantees, in the worst case, to add no more than 1 byte in 254 to any packet. For large packets, this means that their encoded size is no more than 100.4% of their pre-encoding size. This is much better than the 200% worst-case bound that is common for many byte stuffing algorithms, and is close to the information-theoretic limit of about 100.07%. Furthermore, the COBS algorithm is computationally cheap, and its average overhead is very competitive with that of existing algorithms.
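The encoding side of COBS is short enough to sketch directly: each run of up to 254 non-zero bytes is prefixed with a one-byte code giving the distance to the next (removed) zero, which is where the 1-in-254 worst-case bound comes from. This is a plain restatement of the published algorithm, written here for illustration.

```python
def cobs_encode(data: bytes) -> bytes:
    """Encode data so that the output contains no zero bytes."""
    out = bytearray()
    block = bytearray()
    for b in data:
        if b == 0:
            out.append(len(block) + 1)   # code byte: offset to the removed zero
            out += block
            block = bytearray()
        else:
            block.append(b)
            if len(block) == 254:        # maximal run: code 0xFF means "no implicit zero follows"
                out.append(0xFF)
                out += block
                block = bytearray()
    out.append(len(block) + 1)           # final (possibly empty) block
    out += block
    return bytes(out)
```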

12.
宁卓, 孙知信, 龚俭, 张维维. Acta Electronica Sinica (电子学报), 2012, 40(3): 530-537
This paper combines the dynamic characteristics of traffic with the static characteristics of the intrusion detection system rule base to build a high-performance packet classification tree, and proposes a new packet classification algorithm, FlowCopySearch (FCS), for high-speed intrusion detection on backbone networks. The improvements are: (1) an optimal classification tree is defined from the new perspective of traffic, and classification-field entropy is introduced to measure the discriminating power of each classification field with respect to the traffic; (2) the memory-copy operation that traditional classification algorithms must perform frequently for every packet is reduced to a single memory copy per flow, removing the bottleneck of packet classification algorithms. Experimental results show that FCS is better suited to classifying packets in large backbone traffic traces: compared with two classical classification algorithms, the classification speed improves by 10.1%-45.1% while memory consumption falls by 11.1%-36.6%.
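The classification-field entropy mentioned above can be pictured as the entropy of a header field's value distribution over observed traffic, so that the most discriminating field is split first when building the tree. The field names and the exact ranking rule below are illustrative assumptions, not the FCS definition itself.

```python
import math
from collections import Counter

def field_entropy(packets, field):
    """packets: iterable of dicts with header fields; returns Shannon entropy of one field."""
    counts = Counter(p[field] for p in packets)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Example: choose the field that spreads the observed traffic most evenly.
# best_field = max(["src_ip", "dst_ip", "src_port", "dst_port", "proto"],
#                  key=lambda f: field_entropy(packets, f))
```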

13.
Algorithms for packet classification   Total citations: 5 (self-citations: 0, others: 5)
Gupta P., McKeown N. IEEE Network, 2001, 15(2): 24-32
The process of categorizing packets into “flows” in an Internet router is called packet classification. All packets belonging to the same flow obey a predefined rule and are processed in a similar manner by the router. For example, all packets with the same source and destination IP addresses may be defined to form a flow. Packet classification is needed for non-best-effort services, such as firewalls and quality of service, that require the capability to distinguish and isolate traffic in different flows for suitable processing. In general, packet classification on multiple fields is a difficult problem. Hence, researchers have proposed a variety of algorithms which, broadly speaking, can be categorized as basic search algorithms, geometric algorithms, heuristic algorithms, or hardware-specific search algorithms. In this tutorial we describe algorithms that are representative of each category, and discuss which type of algorithm might be suitable for different applications.
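As a concrete illustration of multi-field classification (not taken from the tutorial), the sketch below matches a packet's 5-tuple against an ordered rule list with prefix and range constraints; linear search over the rules is the simplest of the "basic search" schemes mentioned above.

```python
from ipaddress import ip_address, ip_network

def matches(rule, pkt):
    """rule: {'src': '10.0.0.0/8', 'dst': ..., 'sport': (lo, hi), 'dport': (lo, hi),
    'proto': 'tcp' or 'any', 'action': ...}; pkt: dict with the packet's 5-tuple."""
    return (ip_address(pkt["src"]) in ip_network(rule["src"]) and
            ip_address(pkt["dst"]) in ip_network(rule["dst"]) and
            rule["sport"][0] <= pkt["sport"] <= rule["sport"][1] and
            rule["dport"][0] <= pkt["dport"] <= rule["dport"][1] and
            rule["proto"] in (pkt["proto"], "any"))

def classify(rules, pkt):
    """Rules are ordered by priority; the first matching rule decides the flow."""
    return next((r["action"] for r in rules if matches(r, pkt)), "default")
```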

14.
Traditional packet classification for IPv4 involves examining the standard 5-tuple of a packet header: source address, destination address, source port, destination port, and protocol. With the introduction of the IPv6 flow label field, which labels packets belonging to the same flow, packet classification can be resolved on three dimensions: flow label, source address, and destination address. In this paper, we propose a novel approach to 3-tuple packet classification based on the flow label. By introducing a conversion engine to convert the source-destination pairs into compound address prefixes, we put forward an algorithm called Reducing Dimension (RD), with dimension reduction capability, which combines heuristic tree search with the use of buckets. We also provide an improved version of RD, called Improved RD (IRD), which uses two mechanisms, path compression and priority tagging, to optimize performance. To evaluate our algorithms, extensive experiments have been conducted using a number of synthetically generated databases. For memory consumption, the two proposed algorithms consume only around 3% of that of existing algorithms when the number of filters increases to 10 k. For average search time, the two proposed algorithms are more than four times faster than the others when the number of filters is 10 k. The results show that the proposed algorithms work well and, thanks to their dimension reduction capability, outperform many typical existing algorithms.

15.
Capacity has been an important issue for many wireless backhaul networks. Both the multihop nature of these networks and the large per-packet channel access overhead can lead to low channel efficiency. The problem may get even worse when many applications transmit packets with small data payloads, e.g., Voice over Internet Protocol (VoIP). Previously, the use of multiple parallel channels and packet concatenation were treated as separate solutions to these problems, and there is no available work on the integrated design and performance analysis of a complete scheduler architecture combining the two schemes. In this paper, we propose a scheduler that concatenates small packets into large frames and sends them through multiple parallel channels, with an intelligent channel selection algorithm between neighboring nodes. Besides the expected capacity improvements, we also derive delay bounds for this scheduler. Based on the delay bound formula, call admission control (CAC) for a broad range of scheduling algorithms can be obtained. We demonstrate the significant capacity and resequencing delay improvements of this design with a voice-data traffic mixing example, via both numerical and simulation results. The proposed packet concatenation and channel selection algorithms greatly outperform the round-robin scheduler in a multihop scenario.

16.
Classical Transmission Control Protocol (TCP) designs have never considered the identity of the competing transport protocol as useful information for TCP sources in congestion control mechanisms. When competing against a TCP flow on a bottleneck link, a User Datagram Protocol (UDP) flow can unfairly occupy the entire link bandwidth and starve all TCP flows on the link. If it were possible for a TCP source to know the type of transport protocol that deprives it of link access, perhaps it could react in a way that prevents total starvation. In this paper, we use the coefficient of variation and the power spectral density of throughput traces to identify the presence of UDP transport protocols competing against TCP flows on bottleneck links. Our results show clear traits that differentiate the presence of competing UDP flows from that of TCP flows, independent of round-trip time variations. The signatures we identified include an increase in the coefficient of variation whenever a competing UDP flow joins the bottleneck link for the first time, a noisy spectral density for a TCP flow when it competes against a UDP flow on the bottleneck link, and a dominant frequency with outstanding power when the competition comes from TCP only. In addition, the results show that signatures for congestion caused by competing UDP flows differ from those for congestion caused by competing TCP flows, regardless of their round-trip times. The results in this paper present the first steps towards the development of more 'intelligent' congestion control algorithms with the added capability of knowing the identity of aggressor protocols against TCP, and subsequently using this additional information for rate control.
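The two throughput-trace signatures used above, the coefficient of variation and the dominant frequency of the power spectral density, can be computed as sketched below; the sampling interval dt and the periodogram-style PSD estimate are illustrative assumptions.

```python
import numpy as np

def throughput_signatures(throughput, dt=0.1):
    """throughput: per-interval throughput samples; dt: sampling interval in seconds."""
    x = np.asarray(throughput, dtype=float)
    cv = x.std() / x.mean()                              # coefficient of variation
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2         # periodogram of the demeaned trace
    freqs = np.fft.rfftfreq(len(x), d=dt)
    dominant = freqs[np.argmax(psd[1:]) + 1]             # strongest non-DC component
    return cv, dominant
```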

17.
With the widespread deployment of firewalls on the Internet, User Datagram Protocol (UDP) based Voice over Internet Protocol (VoIP) systems may be unable to transmit voice data. This paper proposes a novel method to transmit voice data over the Transmission Control Protocol (TCP). The method adopts an out-of-order TCP delivery strategy, which allows the application layer to read discontinuous data packets directly from the TCP queue without waiting for the retransmission of lost packets. A byte-stream boundary identification algorithm based on the consistent overhead byte stuffing algorithm is designed to efficiently identify complete voice packets from the out-of-order TCP data that has arrived, so that the data can be passed to the audio processing module in a timely manner. By implementing and testing a prototype system, we verify that the proposed algorithm solves the high delay, jitter, and discontinuity problems that the standard TCP protocol exhibits when transmitting voice packets, which are caused by its error control and retransmission mechanisms. The results show that the method proposed in this paper is effective and practical.

18.
Optimized implementation of a region-splitting packet classification algorithm   Total citations: 4 (self-citations: 0, others: 4)
Packet classification is the process of categorizing arriving packets according to a set of rules based on their header information; it is a key technology in next-generation routers, firewalls, QoS guarantee mechanisms, and network monitoring devices. The region-splitting packet classification algorithm is one of the more effective among the many existing classification algorithms. Optimizing the implementation according to the characteristics of a given rule set is the core research content of region-splitting packet classification; it comprises two parts: an efficient criterion for optimally splitting regions, and single-field linear search within the small regions after splitting. The optimized implementation not only gives the algorithm good time and space performance, but also greatly reduces the impact of a growing number of rules on its performance. Simulation results show that, within a certain range of rule counts, the region-splitting packet classification algorithm can process 3-6 million IP packet headers per second, with O(d) time complexity (d being the number of fields) and O(dN) space complexity (N being the number of rules). The algorithm also supports real-time updates of the rule set.

19.
Ons Jelassi, Olivier Paul. Annals of Telecommunications, 2007, 62(11-12): 1388-1400
Packet classification is a central function in filtering systems such as firewalls and intrusion detection mechanisms. Several mechanisms for fast packet classification have been proposed, but existing algorithms do not always scale to large filter databases in terms of search time and memory requirements. In this paper, we present a novel multi-field packet classification algorithm based on an existing algorithm called Pacars, and we show its advantages compared with previously proposed algorithms. We give performance measurements using a publicly available benchmark developed at Washington University, and show how our algorithm offers improved search times without any limitation in terms of incremental updates.

20.
Buffers in emerging optical packet routers are expensive resources, and it is expected that they will be able to store at most a few tens of kilobytes of data in the optical domain. When TCP and real-time (UDP) traffic multiplex at an optical router with such small buffers (less than 50 KB), we recently showed that UDP packet loss can increase with increasing buffer size. This anomalous loss behaviour can negate the investment made in larger buffers and degrade quality of service. In this paper we explore whether this anomalous behaviour can be alleviated by dedicating (i.e., pre-allocating) buffers to UDP traffic. Our contributions are twofold. First, we show using extensive simulations that there appears to be a critical buffer size above which UDP benefits from dedicated buffers that protect it from the aggressive nature of TCP; for smaller buffers below this critical value, UDP can benefit from time-sharing the buffers with TCP. Second, we develop a simple linear model that quantitatively captures the combined utility of TCP and real-time traffic for shared and dedicated buffers, and propose a method to optimise the buffer split ratio with the objective of maximising the overall network utility. Our study equips designers of optical packet switched networks with quantitative tools to tune their buffer allocation strategies subject to various system parameters, such as the traffic mix ratio and the relative weights associated with TCP and UDP traffic.
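The buffer-split optimisation can be sketched as a one-dimensional search over the fraction of buffer dedicated to UDP, maximising a weighted sum of the two utilities; the utility functions and weights below are illustrative assumptions and not the paper's fitted linear model.

```python
import numpy as np

def best_split(total_buffer_kb, u_tcp, u_udp, w_tcp=0.5, w_udp=0.5, steps=101):
    """u_tcp(b), u_udp(b): utility of giving that traffic class b KB of dedicated buffer."""
    fractions = np.linspace(0.0, 1.0, steps)
    utility = [w_tcp * u_tcp((1.0 - f) * total_buffer_kb) +
               w_udp * u_udp(f * total_buffer_kb) for f in fractions]
    return fractions[int(np.argmax(utility))]   # fraction of buffer dedicated to UDP
```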
