Similar Documents
20 similar documents retrieved.
1.
Objective: Traditional tracking algorithms are prone to drift in complex environments. By improving the TLD (tracking-learning-detection) framework, this paper proposes a sliding-window-based local and global search strategy, an integral-histogram filter, and a random Haar-like block-feature filter. Method: First, the integral-histogram filter efficiently rejects large numbers of non-target sub-window blocks, reducing the number of feature matches required by the subsequent filters. Second, the random Haar-like block-feature filter addresses the loss of tracking accuracy caused by drift in complex environments (multiple objects, partial or large-area occlusion, fast motion, etc.). Results: A cascaded classifier combining the original TLD filters with the two new filters was compared experimentally against mainstream tracking algorithms; it shows strong robustness and high tracking accuracy against both stable backgrounds and complex environments, and the combined local and global search strategy improves computational speed. Conclusion: The proposed method tracks the target accurately at multiple scales under background changes and object deformation, and combining global and local search effectively offsets the higher time complexity introduced by the cascaded classifier, enabling real-time tracking.
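The integral-histogram filter mentioned above can be illustrated with a minimal NumPy sketch (not the authors' code): per-bin integral images are precomputed so the gray-level histogram of any rectangular sub-window is obtained in O(bins) time, which is what allows non-target windows to be rejected cheaply.

```python
import numpy as np

def integral_histogram(gray, bins=16):
    """Per-bin integral images: ih[b, y, x] = count of pixels with bin b in gray[:y, :x]."""
    q = (gray.astype(np.int64) * bins) // 256          # quantize gray levels to bin indices
    ih = np.zeros((bins, gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    for b in range(bins):
        ih[b, 1:, 1:] = np.cumsum(np.cumsum(q == b, axis=0), axis=1)
    return ih

def window_histogram(ih, y, x, h, w):
    """Histogram of the h x w window whose top-left corner is (y, x), in O(bins)."""
    return (ih[:, y + h, x + w] - ih[:, y, x + w]
            - ih[:, y + h, x] + ih[:, y, x])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
    ih = integral_histogram(frame)
    target = window_histogram(ih, 30, 40, 24, 24)
    candidate = window_histogram(ih, 60, 80, 24, 24)
    # Simple histogram-intersection score in [0, 1]; low-scoring windows would be filtered out.
    score = np.minimum(target, candidate).sum() / target.sum()
    print(round(float(score), 3))
```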

2.
刘思  刘海  陈启买  贺超波 《计算机应用》2017,37(8):2234-2239
Existing random-walk-based link-prediction indices on unweighted networks have highly stochastic transition processes and do not consider how the structural similarity between different neighbouring nodes should influence the transition probabilities. To address this, a link-prediction algorithm combining network representation learning with random walks is proposed. First, DeepWalk, a deep-learning-based network representation learning algorithm, learns the latent structural features of the nodes and embeds every node into a low-dimensional vector space. Then, the similarity of neighbouring nodes in that vector space is incorporated into the walk processes of the random walk with restart (RWR) and local random walk (LRW) algorithms, redefining the transition probabilities between neighbours. Finally, extensive experiments on five real-world datasets show that, compared with eight representative structure-based link-prediction baselines, the proposed algorithm improves the AUC of the prediction results in all cases, by up to 3.34%.
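A simplified sketch of the idea (not the authors' implementation) follows: node embeddings such as those learned by DeepWalk re-weight the transition matrix by cosine similarity between neighbours, and a random walk with restart is iterated to a stationary score used for link prediction. The weighting `1 + similarity` and the toy graph are illustrative assumptions.

```python
import numpy as np

def rwr_scores(adj, emb, restart=0.15, iters=100):
    """Random walk with restart whose transition probabilities are re-weighted
    by cosine similarity of node embeddings (DeepWalk-style vectors)."""
    n = adj.shape[0]
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T                       # cosine similarity in [-1, 1]
    w = adj * (1.0 + sim)                   # keep edge weights non-negative
    p = w / w.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    scores = np.eye(n)                      # one walker distribution per start node
    restart_vec = np.eye(n)
    for _ in range(iters):
        scores = (1 - restart) * scores @ p + restart * restart_vec
    return scores                           # scores[i, j]: relevance of node j w.r.t. node i

if __name__ == "__main__":
    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 0],
                    [0, 1, 0, 0]], dtype=float)
    rng = np.random.default_rng(1)
    emb = rng.normal(size=(4, 8))           # stand-in for learned embeddings
    print(np.round(rwr_scores(adj, emb)[0], 3))   # link-prediction scores for node 0
```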

3.
To increase the capacity of hidden information in digital image steganography, an image steganography algorithm based on dual pseudo-random numbers is proposed. The principles of pseudo-random number generation and two-pixel embedding are first introduced; a randomly generated integer pseudo-random number is then treated as an auxiliary pixel value and, together with one pixel of the cover image, used to embed two secret bits into a single cover pixel, which increases the hiding capacity while keeping the change to each pixel small. Finally, the embedding performance of the algorithm is analysed, and simulation experiments show that the method improves the hidden-information capacity while maintaining good security.

4.
A pseudo-random number generator based on the Rijndael block cipher, called Rijndael PRNG, is proposed. Security analysis, pseudo-randomness tests, and phase-space reconstruction analysis show that Rijndael PRNG is simple, highly secure, exhibits good pseudo-random behaviour, and is easy to implement in hardware, providing a new pseudo-random number generator option for practical applications.
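The abstract does not spell out the construction, so the sketch below is only a counter-mode illustration in the same spirit: an incrementing 128-bit counter is encrypted under AES (the standardized Rijndael variant) to produce pseudo-random bytes. It relies on the third-party `cryptography` package, which is an assumption, not the paper's design.

```python
import os
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class AESCounterPRNG:
    """Pseudo-random byte generator: AES encryption of an incrementing 128-bit
    counter. Illustrative of a Rijndael-based PRNG, not the paper's exact scheme."""

    def __init__(self, key: bytes = None):
        self.key = key if key is not None else os.urandom(16)
        self.counter = 0
        self.cipher = Cipher(algorithms.AES(self.key), modes.ECB())

    def random_bytes(self, n: int) -> bytes:
        out = bytearray()
        enc = self.cipher.encryptor()
        while len(out) < n:
            block = struct.pack(">QQ", 0, self.counter)   # 128-bit big-endian counter
            out.extend(enc.update(block))                  # one 16-byte pseudo-random block
            self.counter += 1
        return bytes(out[:n])

if __name__ == "__main__":
    prng = AESCounterPRNG()
    print(prng.random_bytes(8).hex())
```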

5.
The cost of testing activities is a major portion of the total cost of software. In testing, generating test data is very important because the efficiency of testing is highly dependent on the data used in this phase. In search-based software testing, soft computing algorithms explore test data in order to maximize a coverage metric, which can be considered an optimization problem. In this paper, we employed several meta-heuristics (Artificial Bee Colony, Particle Swarm Optimization, Differential Evolution and Firefly Algorithms) and the Random Search algorithm to solve this optimization problem. First, the dependency of the algorithms on the values of the control parameters was analyzed and suitable values for the control parameters were recommended. The algorithms were compared on various fitness functions (path-based, dissimilarity-based, and approximation level + branch distance), because the fitness function affects the behaviour of the algorithms in the search space. Results showed that meta-heuristics can be used effectively for hard problems and when the search space is large. Moreover, the approximation level + branch distance fitness function generally guides the algorithms accurately.
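To illustrate the branch-distance component of the "approximation level + branch distance" fitness function, the sketch below implements the classic Korel-style distance for simple relational predicates; the approximation level is added as the count of unreached control-dependent branches. This is a generic sketch under those standard definitions, not the paper's code.

```python
K = 1.0  # constant added when a predicate is false but "almost" true

def branch_distance(op, lhs, rhs):
    """Korel-style branch distance: 0 if the desired branch is taken,
    otherwise a value indicating how far the inputs are from taking it."""
    if op == "==":
        return 0.0 if lhs == rhs else abs(lhs - rhs) + K
    if op == "!=":
        return 0.0 if lhs != rhs else K
    if op == "<":
        return 0.0 if lhs < rhs else (lhs - rhs) + K
    if op == "<=":
        return 0.0 if lhs <= rhs else (lhs - rhs) + K
    if op == ">":
        return 0.0 if lhs > rhs else (rhs - lhs) + K
    if op == ">=":
        return 0.0 if lhs >= rhs else (rhs - lhs) + K
    raise ValueError(op)

def fitness(approximation_level, op, lhs, rhs):
    """Lower is better: unreached control levels plus normalised branch distance."""
    d = branch_distance(op, lhs, rhs)
    return approximation_level + d / (d + 1.0)   # distance normalised into [0, 1)

# Example: target branch 'if x > 100' with x = 40 and one unreached level above it.
print(fitness(1, ">", 40, 100))   # 1 + 61/62, roughly 1.98
```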

6.
Random walk is one of the widely used techniques for information discovery in unstructured networks like Ad hoc Wireless Networks (AWNs) and Wireless Sensor Networks (WSNs). Instead of taking every step uniformly at random, our idea is to modify the random walk to take some level-biased steps so as to improve its energy efficiency and latency, which are important design parameters of protocols for WSNs. The level of a node is defined as the minimum number of hops in which it can reach the sink node. We propose three protocols, namely Several Short Random Walks (SSRW) search, Random Walk with Level Biased Jumps (RWLBJ) search, and Level Biased Random Walk (LBRW) search. The proposed protocols use a combination of random and level-biased steps to search for the target information. Moving from SSRW to LBRW, the percentage of biased steps increases and the percentage of random steps decreases: SSRW uses the fewest biased steps, LBRW uses only biased steps, and RWLBJ lies in between. We show by extensive simulations and testbed experiments that SSRW, RWLBJ, and LBRW are better choices than a pure random walk in terms of the energy consumption and latency of search, and that among the proposed protocols, LBRW and RWLBJ are the best.
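A toy sketch of the level-biased idea follows: with some probability a step prefers a neighbour with a smaller level (fewer hops to the sink), otherwise it is uniform random. The bias parameter, function names, and topology are illustrative assumptions, not the paper's protocol definitions.

```python
import random

def level_biased_walk(neighbors, levels, start, target, bias=0.8, max_steps=1000):
    """Random walk in which a fraction `bias` of steps prefer neighbours with a
    smaller level; bias=0 is a pure random walk, bias=1 uses only biased steps
    (LBRW-like), and intermediate values mimic RWLBJ-like behaviour."""
    node, steps = start, 0
    while node != target and steps < max_steps:
        nbrs = neighbors[node]
        downhill = [v for v in nbrs if levels[v] < levels[node]]
        if downhill and random.random() < bias:
            node = random.choice(downhill)      # level-biased step
        else:
            node = random.choice(nbrs)          # uniform random step
        steps += 1
    return steps

if __name__ == "__main__":
    random.seed(0)
    # Tiny topology; node 0 is the sink, levels are hop counts to it.
    neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
    levels = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2}
    print([level_biased_walk(neighbors, levels, start=4, target=0, bias=b)
           for b in (0.0, 0.5, 1.0)])   # more bias usually means fewer steps
```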

7.
Context: Genetic algorithms (GA) are an important intelligent method for automatic software test data generation. However, existing GAs tend to get trapped in local optimal solutions, leading to population aging, which can significantly reduce the benefits of GA-based software testing and increase cost and effort. Although much attention has been focused on solving this problem by improving the chromosome population, the genetic operations, and the adjustment of genetic parameters, the applicability of most of the proposed algorithms is often narrow because of the complex operations involved and the nondeterminism inherited from traditional GAs. Objectives: This paper proposes a new algorithm, the Regenerate Genetic Algorithm (RGA), based on a simple, stable, and easy-to-implement regeneration strategy that judges the population aging process. Methods: The regeneration strategy defines population aging factors and an aging process in order to determine the degree of population aging; when aging reaches a certain limit, a population regeneration operation is triggered. In contrast to other improved methods, this strategy gives the algorithm a stronger ability to jump out of local optimal solutions, thereby preventing population aging and effectively improving test coverage, without modifying any parameter of the original GA. Results: The proposed algorithm is experimentally evaluated against the basic GA, Random Testing (RT), and several other methods, in terms of both efficiency and effectiveness, on the Siemens Suite of test programs and a more complex real program. The results indicate that the proposed algorithm can effectively increase search efficiency, restrain population aging, increase test coverage, and reduce the number of test cases. Conclusion: RGA has better optimization ability than the conventional algorithms, especially for large-scale and highly complex programs.
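A schematic sketch of the regeneration idea (not the RGA implementation) is shown below: the best fitness is monitored, an "aging" counter grows while it stagnates, and once it crosses a threshold the population is regenerated around the elite. All operators, parameters, and the aging criterion here are illustrative assumptions.

```python
import random

def regenerate_ga(fitness, dim, pop_size=30, generations=200,
                  aging_limit=15, bounds=(-5.0, 5.0)):
    """Toy GA skeleton with a regeneration trigger: when the best fitness has not
    improved for `aging_limit` generations (population aging), the population is
    rebuilt from random individuals plus the elite. Minimisation; illustrative only."""
    lo, hi = bounds
    rand_ind = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    pop = [rand_ind() for _ in range(pop_size)]
    best, aging = min(pop, key=fitness), 0
    for _ in range(generations):
        # Tournament selection + blend crossover + Gaussian mutation.
        children = []
        while len(children) < pop_size:
            a, b = (min(random.sample(pop, 3), key=fitness) for _ in range(2))
            child = [x + random.random() * (y - x) for x, y in zip(a, b)]
            child = [min(hi, max(lo, g + random.gauss(0, 0.1))) for g in child]
            children.append(child)
        pop = children
        gen_best = min(pop, key=fitness)
        if fitness(gen_best) < fitness(best):
            best, aging = gen_best, 0
        else:
            aging += 1
        if aging >= aging_limit:                    # population has aged out: regenerate
            pop = [rand_ind() for _ in range(pop_size - 1)] + [best]
            aging = 0
    return best

if __name__ == "__main__":
    random.seed(0)
    sphere = lambda x: sum(g * g for g in x)
    print([round(g, 3) for g in regenerate_ga(sphere, dim=3)])
```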

8.
The rapid growth of social networks has drawn wide attention to recommender systems (RS). Existing recommendation methods for social networks still suffer from the cold-start problem and ignore the social-network information of users. To address this, a personalized recommendation algorithm based on graph entropy in trust social networks (PRAGE) is proposed. First, a user-item graph (UIG) is built from users, items, and the feedback between them, and a user-trust graph (UTG) is built by introducing a trust mechanism. Second, a random-walk algorithm is run on both graphs to obtain the initial user-item similarities and new trust-based user-item similarities, and the random walk is repeated until the similarities converge. Then, the graph entropies of the UIG and UTG are used to weight the two sets of similarities and produce the final recommendation list for the target user. Experiments on the real-world Epinions and FilmTrust datasets show that, compared with the classic random-walk algorithm, PRAGE improves precision by 34.7% and 19.4% and recall by 28.9% and 21.1%, respectively; it effectively alleviates the cold-start problem of recommendation and outperforms the baselines in both precision and coverage.
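The entropy-weighted blending step can be sketched as below. The paper's exact graph-entropy definition is not given here, so the sketch uses one common notion (Shannon entropy of the degree distribution); the weighting rule, names, and toy data are illustrative assumptions only.

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy of a graph's degree distribution -- one common notion of
    graph entropy, used here only to illustrate weighting two similarity sets."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    total = sum(deg.values())
    return -sum((d / total) * math.log2(d / total) for d in deg.values())

def weighted_similarity(sim_uig, sim_utg, uig_edges, utg_edges):
    """Blend item similarities from the user-item graph (UIG) and the user-trust
    graph (UTG), weighting each source by its graph entropy."""
    h_uig, h_utg = degree_entropy(uig_edges), degree_entropy(utg_edges)
    w = h_uig / (h_uig + h_utg)
    return {item: w * sim_uig.get(item, 0.0) + (1 - w) * sim_utg.get(item, 0.0)
            for item in set(sim_uig) | set(sim_utg)}

# Toy usage: two similarity dictionaries for candidate items of one target user.
uig_edges = [("u1", "i1"), ("u1", "i2"), ("u2", "i1"), ("u3", "i3")]
utg_edges = [("u1", "u2"), ("u2", "u3")]
print(weighted_similarity({"i1": 0.9, "i2": 0.4}, {"i1": 0.6, "i3": 0.7},
                          uig_edges, utg_edges))
```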

9.
Cooperative tracking of multiple moving targets in underwater sensor networks remains a technical challenge. To address it, this paper proposes a distributed algorithm for enhancing the directional path coverage of multiple targets. In a real three-dimensional underwater sensor network, the sensor nodes drift with the current while the tracked targets move autonomously; node mobility is assumed to follow the Meandering Current Mobility model, and the multiple moving targets follow probability-based Random Walk mobility trajectories. In the proposed algorithm, the sensor nodes covering a target's path cooperate within their two-hop neighbourhoods to maximize path coverage: each underwater sensor node adjusts its own directional sensing orientation so that the average directional coverage of the multi-target trajectories is maximized while the standard deviation of that coverage is kept as small as possible. Finally, MATLAB simulations verify the effectiveness of the distributed coverage-enhancement algorithm, which significantly improves the path coverage of multiple moving targets.

10.
Genetic algorithms are powerful generalized search techniques. This paper shows that genetic algorithms can solve a difficult class of problems in general systems theory quickly and efficiently. Genetic algorithms appear to be ideally suited to solving the combinatorially complex problem of behavior analysis. The search space of behavior analysis experiences exponential growth as a function of the number of variables. The genetic algorithm converges after considering a small percentage of these potential solutions. The number of solutions that need to be examined by the genetic algorithm seems to be a polynomial function of the number of variables and, in fact, the growth appears to be linear.

11.
Random errors increase with the increase in word length. Random errors that are spread over the word in blocks of suitable lengths are defined as phased errors. Correction of phased errors is considered, and special types of codes correcting double phased errors that are perfect in the sense that they correct all single and double phased errors and no others are systematically defined and studied. Decoding algorithms, nonbinary extensions, modifications, weight enumerators, and generalizations for correcting more than two phased errors are also investigated.

12.
陶涛  张云 《中国图象图形学报》2015,20(12):1639-1651
Objective: The widely used SIFT algorithm and its variants detect and describe feature points with a difference-of-Gaussian (DoG) function, which discards high-frequency image information; as a result, their matching performance degrades sharply as image deformation increases. To overcome this defect, a new scale-invariant feature detection and description algorithm in log-polar coordinates, with no loss of image information, is proposed. Method: The proposed algorithm first converts the circular patch centred on a sample point in Cartesian coordinates into a rectangular block in log-polar coordinates and performs feature detection and descriptor extraction on this block. A fixed-width window is slid along the logtr axis of the sample point's log-polar radial-gradient image to decide whether the point is a feature point and to compute its characteristic scale, and the descriptor is extracted at the characteristic scale with the locally maximal window response. The descriptor is based on the magnitudes and orientations of the grey-level gradients of the rectangular log-polar block; it is a 192-dimensional vector and is invariant to scale, rotation, and illumination changes. Results: Using the INRIA dataset and the matching-performance metrics proposed by Mikolajczyk, the proposed algorithm was compared with SIFT and SURF; it shows advantages in the number of correspondences, repeatability, number of correct matches, and matching rate. Conclusion: An image-matching algorithm in log-polar coordinates is proposed: converting the circular patch around a sample point into a rectangular log-polar block avoids the high-frequency information loss that SIFT incurs by using the DoG function during feature detection, and the log-polar representation reduces the amount of image variation during descriptor extraction, thereby improving matching performance.
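The core coordinate transform described above, mapping a circular patch centred on a sample point into a rectangular log-polar block, can be sketched with plain NumPy as follows. Nearest-neighbour sampling and the bin counts are illustrative choices, not the paper's parameters.

```python
import numpy as np

def logpolar_patch(image, cy, cx, r_max, radial_bins=32, angular_bins=64):
    """Resample the circular neighbourhood of (cy, cx) into a rectangular
    log-polar block: rows index log-radius, columns index angle."""
    logs = np.linspace(np.log(1.0), np.log(r_max), radial_bins)
    angles = np.linspace(0.0, 2.0 * np.pi, angular_bins, endpoint=False)
    rr = np.exp(logs)[:, None]                      # (radial_bins, 1)
    ys = np.clip(np.rint(cy + rr * np.sin(angles)[None, :]), 0, image.shape[0] - 1)
    xs = np.clip(np.rint(cx + rr * np.cos(angles)[None, :]), 0, image.shape[1] - 1)
    return image[ys.astype(int), xs.astype(int)]    # (radial_bins, angular_bins)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(200, 200), dtype=np.uint8)
    block = logpolar_patch(img, cy=100, cx=100, r_max=40)
    print(block.shape)   # (32, 64): the rectangular block used for detection/description
    # A rotation of the image becomes a circular shift along the angle axis,
    # and a scale change becomes a shift along the log-radius axis.
```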

13.
A Similarity Search Algorithm Based on Generalized Hypersurface Trees
张兆功  李建中 《软件学报》2002,13(10):1969-1976
Similarity search is one of the main areas of data mining: it retrieves data similar to a query from a database and discovers similarities among the data. It can be applied to image databases, spatial databases, and time-series analysis. For Euclidean space (a special metric space), R-tree-based similarity-search methods are efficient at low dimensionality, but as the dimensionality increases they degenerate to linear scans, a phenomenon known as the dimensionality curse, whose main cause is data overlap. When the data volume is large and the dimensionality is high, distance computations and I/O operations become very expensive. This paper proposes a new space-partitioning method and index structure for metric spaces, the rgh-tree, which partitions and distributes the data using the distances between the database objects and a small number of fixed reference objects, producing a balanced tree whose nodes contain no overlapping data. A corresponding similarity-search algorithm built on the rgh-tree is also proposed; it has a small I/O cost and few distance computations, with an average complexity of approximately O(n^0.58), and it addresses several problems of existing algorithms.
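The core idea of partitioning a metric space by distances to a few fixed reference objects (pivots) can be sketched as below; this is a generic pivot-partitioning and triangle-inequality pruning illustration, not the rgh-tree itself, and all names and parameters are assumptions.

```python
import numpy as np

def pivot_partition(data, pivots):
    """Assign each object to the partition of its nearest reference object (pivot),
    and keep the full object-to-pivot distance table for later pruning."""
    dists = np.linalg.norm(data[:, None, :] - pivots[None, :, :], axis=2)
    return dists.argmin(axis=1), dists        # partition id per object, distances

def search_candidates(query, pivots, pivot_dists, radius):
    """Keep only objects that can lie within `radius` of `query`:
    |d(q, p) - d(x, p)| <= d(q, x) for every pivot p (triangle inequality)."""
    q_dists = np.linalg.norm(pivots - query, axis=1)
    lower_bounds = np.abs(pivot_dists - q_dists[None, :]).max(axis=1)
    return np.where(lower_bounds <= radius)[0]   # indices worth checking exactly

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(1000, 16))
    pivots = data[rng.choice(len(data), size=4, replace=False)]
    assign, pivot_dists = pivot_partition(data, pivots)
    query = rng.normal(size=16)
    cand = search_candidates(query, pivots, pivot_dists, radius=4.0)
    print(len(cand), "of", len(data), "objects need an exact distance check")
```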

14.
In this work we explore three modifications to the architecture of a Data-Driven VLSI array of processors, previously introduced in Koren et al. (IEEE Computer 21, 10 (Oct. 1988), 30-43). These modifications are geared toward improving the array utilization, as well as the performance of the mapped algorithms. The first modification increases the internal parallelism present in each array processing element, allowing it to simultaneously execute two instructions. A second modification improves the connectivity between processing elements by adding wires and switches between these elements. The third modification creates blocks of processing elements, featuring tighter coupling and faster communication, to take advantage of algorithm locality. Each change is evaluated by assessing its impact on the mapping of algorithms onto the modified array.

15.
Regardless of the performance of the gravitational search algorithm (GSA), it is nearly incapable of avoiding local optima in high-dimensional problems. To improve the accuracy of GSA, it is necessary to fine-tune its parameters. This study introduces a gravitational search algorithm based on learning automata (GSA-LA) for optimisation of continuous problems. The gravitational constant G(t) is a significant parameter that is used to adjust the accuracy of the search. In this work, a learning capability is utilised to select G(t) based on spontaneous reactions. To measure the performance of the introduced algorithm, numerical analysis is conducted on several well-designed test functions, and the results are compared with the original GSA and other evolutionary algorithms. Simulation results demonstrate that the learning automata-based gravitational search algorithm is more efficient in finding optimum solutions and outperforms the existing algorithms.

16.

This paper describes two new suboptimal mask search algorithms for Fuzzy inductive reasoning (FIR), a technique for modelling dynamic systems from observations of their input/output behaviour. Inductive modelling is by its very nature an optimisation problem. Modelling large-scale systems in this fashion involves solving a high-dimensional optimisation problem, a task that invariably carries a high computational cost. Suboptimal search algorithms are therefore important. One of the two proposed algorithms is a new variant of a directed hill-climbing method. The other algorithm is a statistical technique based on spectral coherence functions. The utility of the two techniques is demonstrated by means of an industrial example. A garbage incinerator process is inductively modelled from observations of 20 variable trajectories. Both suboptimal search algorithms lead to similarly good models. Each of the algorithms carries a computational cost that is in the order of a few percent of the cost of solving the complete optimisation problem. Both algorithms can also be used to filter out variables of lesser importance, i.e. they can be used as variable selection tools.

17.
There is a commonly held opinion that the algorithms for learning unrestricted types of Bayesian networks, especially those based on the score+search paradigm, are not suitable for building competitive Bayesian network-based classifiers. Several specialized algorithms that carry out the search over different types of directed acyclic graph (DAG) topologies have since been developed, most of these being extensions (using augmenting arcs) or modifications of the Naive Bayes basic topology. In this paper, we present a new algorithm to induce classifiers based on Bayesian networks which obtains excellent results even when standard scoring functions are used. The method performs a simple local search in a space different from that of unrestricted or augmented DAGs. Our search space consists of a type of partially directed acyclic graph (PDAG) which combines two concepts of DAG equivalence: classification equivalence and independence equivalence. The results of exhaustive experimentation indicate that the proposed method can compete with state-of-the-art algorithms for classification.

18.
Objective: Large-scale image retrieval is one of the hot topics in computer vision. A basic approach is to extract features from all database images, define a similarity measure over the features, and perform nearest-neighbour retrieval; the key is to design nearest-neighbour search algorithms that meet both the storage and the efficiency requirements. To improve the approximation accuracy of image visual features and reduce their storage cost, a multi-index additive quantization method is proposed. Method: Linear scan has high complexity, and keeping image descriptors in memory for real-time retrieval does not scale to large systems, so this work explores non-exhaustive search based on a multi-index structure and quantized codes. The multi-index structure partitions the original data space into multiple subspaces and assigns the items of each subspace to different inverted lists; the residual items in the inverted lists are then encoded with compressive additive quantization, further reducing the quantization loss with respect to the original space. At query time, a non-exhaustive strategy searches only a few inverted lists for nearest neighbours, which greatly reduces retrieval time; the original data need not be stored, only the index of each item's codeword in the additive-quantization codebooks, which greatly reduces memory consumption. Results: On the SIFT, GIST, and MNIST datasets, recall improves by 4% to 15% over recent algorithms, average precision improves by about 12%, and retrieval time is on par with the fastest algorithms. Conclusion: The proposed multi-index additive quantization method effectively improves the approximation accuracy and storage requirements of image visual features and raises retrieval precision and recall on large-scale datasets. It targets nearest-neighbour search over features and is applicable to massive image collections and other multimedia data.
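A highly simplified sketch of the pipeline described above follows: a vector is assigned to a cell of an inverted multi-index by coarsely quantizing its two halves, and its residual is then encoded with additive quantization (greedy here, whereas real systems train codebooks and use beam search). All codebooks below are random stand-ins, and the sizes are illustrative assumptions.

```python
import numpy as np

def assign_multi_index(x, coarse1, coarse2):
    """Inverted multi-index cell: nearest coarse centroid in each half of the
    vector; the pair (i, j) addresses one inverted list."""
    h = x.shape[0] // 2
    i = int(np.argmin(np.linalg.norm(coarse1 - x[:h], axis=1)))
    j = int(np.argmin(np.linalg.norm(coarse2 - x[h:], axis=1)))
    return i, j

def encode_residual(residual, codebooks):
    """Greedy additive quantization: pick one codeword per codebook so the sum
    of codewords approximates the residual (simplified, not beam search)."""
    codes, approx = [], np.zeros_like(residual)
    for cb in codebooks:                       # cb has shape (K, dim)
        k = int(np.argmin(np.linalg.norm(cb - (residual - approx), axis=1)))
        codes.append(k)
        approx = approx + cb[k]
    return codes, approx

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, K = 8, 16
    coarse1 = rng.normal(size=(K, dim // 2))   # stand-ins for trained coarse codebooks
    coarse2 = rng.normal(size=(K, dim // 2))
    codebooks = [rng.normal(scale=0.5, size=(K, dim)) for _ in range(2)]
    x = rng.normal(size=dim)
    i, j = assign_multi_index(x, coarse1, coarse2)
    center = np.concatenate([coarse1[i], coarse2[j]])
    codes, approx = encode_residual(x - center, codebooks)
    print((i, j), codes, round(float(np.linalg.norm(x - center - approx)), 3))
```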

19.

In machine learning, searching for the optimal feature subset of the original datasets is a challenging and prominent task. Metaheuristic algorithms are used to find the relevant, important features that enhance classification accuracy and save resources and time, and most of them have shown excellent performance in solving feature-selection problems. A recently developed metaheuristic, the gaining-sharing knowledge-based optimization algorithm (GSK), is considered here for finding the optimal feature subset. The GSK algorithm was proposed over a continuous search space; therefore, a total of eight S-shaped and V-shaped transfer functions are employed to map the problems into a binary search space. Additionally, a population reduction scheme is employed together with the transfer functions to enhance the performance of the proposed approaches: the population size is updated in every iteration, so the search space is explored efficiently and the worst solutions are deleted from it. The proposed approaches are tested on twenty-one benchmark datasets from the UCI repository. The obtained results are compared with state-of-the-art metaheuristic algorithms, including the binary differential evolution algorithm, binary particle swarm optimization, the binary bat algorithm, the binary grey wolf optimizer, the binary ant lion optimizer, the binary dragonfly algorithm, and the binary salp swarm algorithm. Among the eight transfer functions, the V4 transfer function with population reduction on the binary GSK algorithm outperforms the other optimizers in terms of accuracy, fitness values, and the minimal number of features. To investigate the results statistically, two non-parametric statistical tests are conducted, which confirm the superiority of the proposed approach.
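The transfer-function step that maps the continuous GSK search space to a binary feature mask can be sketched as below. "V4" here denotes the commonly used V-shaped function |(2/pi)·arctan((pi/2)·x)|; the numbering varies across papers, so this choice and the flip rule are assumptions, not the paper's exact definitions.

```python
import math
import random

def v4(x):
    """A V-shaped transfer function, often numbered V4 in the literature:
    V4(x) = |(2/pi) * arctan((pi/2) * x)|."""
    return abs((2.0 / math.pi) * math.atan((math.pi / 2.0) * x))

def binarize(position, current_mask):
    """Map a continuous GSK-style position vector to a binary feature mask:
    with probability V4(x_d), flip the d-th bit of the current solution."""
    return [1 - b if random.random() < v4(x) else b
            for x, b in zip(position, current_mask)]

if __name__ == "__main__":
    random.seed(0)
    continuous_position = [0.1, -2.3, 0.0, 1.7, -0.4]   # from the continuous update step
    mask = [1, 0, 1, 1, 0]                               # currently selected features
    print(binarize(continuous_position, mask))
```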


20.
The Grey Wolf Optimiser (GWO) is a recently developed approach for solving complex non-linear optimisation problems. It is a relatively simple, leadership-hierarchy-based approach in the class of swarm-intelligence algorithms. For complex real-world non-linear optimisation problems, however, the search equation provided in GWO is not sufficiently explorative. Therefore, in the present paper, an attempt is made to increase the exploration capability, along with the exploitation of the search space, by proposing an improved version of the classical GWO, named Cauchy-GWO. In Cauchy-GWO, a Cauchy operator is integrated: first, two new wolves are generated with the help of Cauchy-distributed random numbers, and then another new wolf is generated as a convex combination of these two. The performance of Cauchy-GWO is exhibited on the standard IEEE CEC 2014 benchmark problem set. Statistical analysis of the results on the CEC 2014 benchmark set and a popular evaluation criterion, the Performance Index (PI), show that Cauchy-GWO outperforms GWO in terms of the error values defined in the IEEE CEC 2014 benchmark collection. Later in the paper, the GWO and Cauchy-GWO algorithms are used to solve three well-known engineering application problems and two reliability problems. From the analysis conducted, it can be concluded that the proposed Cauchy-GWO is a reliable and efficient algorithm for solving continuous benchmark test problems as well as real-life application problems.
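The Cauchy step described above can be sketched as follows: two trial wolves are produced by Cauchy-distributed perturbations and a third is formed as a convex combination of the two. The surrounding GWO update is omitted, and the scale parameter and selection rule noted in the comments are illustrative assumptions.

```python
import numpy as np

def cauchy_trial_wolves(wolf, scale=0.1, rng=None):
    """Generate two trial wolves by adding scaled standard-Cauchy perturbations
    to the current wolf, then a third as a random convex combination of the two.
    Sketch of the Cauchy-GWO idea; the GWO position update itself is omitted."""
    rng = np.random.default_rng() if rng is None else rng
    w1 = wolf + scale * rng.standard_cauchy(size=wolf.shape)
    w2 = wolf + scale * rng.standard_cauchy(size=wolf.shape)
    lam = rng.random()                      # convex-combination coefficient in [0, 1)
    w3 = lam * w1 + (1.0 - lam) * w2
    return w1, w2, w3

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wolf = np.array([0.5, -1.2, 3.0])
    w1, w2, w3 = cauchy_trial_wolves(wolf, rng=rng)
    # In the full algorithm, the best of {wolf, w1, w2, w3} by fitness would be kept.
    print(np.round(w3, 3))
```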
