Similar Documents
10 similar documents found (search time: 187 ms).
1.
This paper explores how to extend CBR (case-based reasoning) to support a special case type: time-series data. It analyzes why time-series similarity algorithms based on spectral analysis are ill-suited to CBR retrieval and, on that basis, designs a CBR retrieval algorithm with good overall performance. The idea is to recast time-series similarity comparison as a convolution problem and to use the DFT to simplify the computation of that convolution. Thorough theoretical analysis and careful experiments show that the proposed algorithm is efficient. With this retrieval algorithm, CBR can be applied to the analysis and reasoning of time-series data, which opens up broad application prospects.
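A minimal sketch of the convolution-via-DFT idea described above, assuming the similarity measure is an FFT-accelerated normalized cross-correlation; this is an illustration, not the paper's exact retrieval algorithm:

```python
import numpy as np

def fft_cross_correlation_similarity(query, candidate):
    """Similarity between two equal-length series via FFT-based
    cross-correlation: correlating in the time domain becomes
    pointwise multiplication in the frequency domain."""
    q = (query - query.mean()) / (query.std() + 1e-12)
    c = (candidate - candidate.mean()) / (candidate.std() + 1e-12)
    n = len(q)
    size = 2 * n                      # zero-pad to avoid circular wrap-around
    Q = np.fft.rfft(q, size)
    C = np.fft.rfft(c, size)
    corr = np.fft.irfft(Q * np.conj(C), size)
    return corr.max() / n             # peak normalized correlation as the score

query = np.sin(np.linspace(0, 4 * np.pi, 128))
candidate = np.sin(np.linspace(0, 4 * np.pi, 128) + 0.3)
print(fft_cross_correlation_similarity(query, candidate))
```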

2.
A new method, the orthogonal algorithm, is presented to compute logic probabilities (i.e., signal probabilities) accurately. The transfer properties of logic probabilities are studied first; they are useful for calculating the logic probability of a circuit with random independent inputs. The orthogonal algorithm is then described for computing the logic probability of a Boolean function realized by a combinational circuit. The algorithm makes the Boolean function "orthogonal", so that its logic probability can be obtained simply by summing the logic probabilities of all orthogonal terms of the Boolean function.
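A minimal sketch of the summation step, assuming the function has already been rewritten as pairwise-disjoint (orthogonal) product terms over independent inputs; the orthogonalization procedure itself is not shown:

```python
def term_probability(term, input_probs):
    """Probability of one product term under independent inputs."""
    p = 1.0
    for var, value in term.items():
        p *= input_probs[var] if value else (1.0 - input_probs[var])
    return p

def signal_probability(orthogonal_terms, input_probs):
    """Because the terms are pairwise disjoint (orthogonal), the probability
    of the whole function is simply the sum of the term probabilities;
    no inclusion-exclusion is needed."""
    return sum(term_probability(t, input_probs) for t in orthogonal_terms)

# f = a.b + a'.c is already orthogonal: the two terms disagree on a.
terms = [{"a": True, "b": True}, {"a": False, "c": True}]
probs = {"a": 0.5, "b": 0.5, "c": 0.5}
print(signal_probability(terms, probs))   # 0.25 + 0.25 = 0.5
```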

3.
Estimating the sufficient number of low-resolution frames (SNL) is a key issue for bringing image-sequence super-resolution into practical use. This paper proposes, for the first time, an SNL estimation method suitable for real applications: sub-sequences of different lengths (frame counts) are selected from the captured low-resolution image sequence and fed into super-resolution processing, yielding a sequence of result images; the differences between images in the result sequence are measured; and the SNL is estimated by analyzing the resulting difference curve. Experiments on real data show that the method estimates the SNL accurately and stably, providing effective support for engineering applications of image-sequence super-resolution.
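A minimal sketch of this estimation loop; `super_resolve` is a hypothetical placeholder for any routine that fuses a list of low-resolution frames into one high-resolution image, and the stopping rule is a simple threshold on the difference curve rather than the paper's analysis:

```python
import numpy as np

def estimate_snl(frames, super_resolve, diff_threshold=1e-3):
    """Run super-resolution on prefixes of increasing length, measure how
    much the result image still changes, and report the prefix length at
    which adding frames stops making a noticeable difference."""
    results = [super_resolve(frames[:k]) for k in range(2, len(frames) + 1)]
    # Difference curve between consecutive result images.
    diffs = [float(np.mean(np.abs(results[i + 1] - results[i])))
             for i in range(len(results) - 1)]
    for i, d in enumerate(diffs):
        if d < diff_threshold:
            return i + 2          # results[i] was produced from i + 2 frames
    return len(frames)            # never stabilized: use every frame
```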

4.
Research on Computing the Support Vectors of Massive Data Based on the Neighborhood Principle   (total citations: 19; self-citations: 0; citations by others: 19)
张文生  丁辉  王珏 《软件学报》2001,12(5):711-720
It is quite difficult to compute the support vectors of massive data directly with support vector machine theory. To address this, a method for computing support vectors based on the neighborhood principle is proposed. On the basis of a comparative analysis of SVM theory and the neighborhood principle, the paper (1) constructs a composite inner-product function from the sample space through the feature space to the expanded space and presents the neighborhood idea for computing support vectors; (2) reformulates SVM theory over a distance (metric) space and designs a neighborhood algorithm for computing support vectors, so that the algorithm can be understood as a way of simplifying the quadratic-programming computation; and (3) reports experimental results showing that the neighborhood principle can effectively solve the problem of computing support vectors for massive data.
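A minimal sketch of the general idea of pre-selecting likely support vectors before solving the quadratic program, here using a plain k-nearest-neighbor boundary filter and scikit-learn's SVC; this is an illustration only, not the composite inner-product algorithm of the paper:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def neighborhood_candidates(X, y, k=10):
    """Keep only points whose k-neighborhood contains both classes,
    i.e. points lying near the class boundary and therefore likely
    support vectors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    mask = np.array([len(set(y[i])) > 1 for i in idx])
    return X[mask], y[mask]

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xc, yc = neighborhood_candidates(X, y)
clf = SVC(kernel="rbf").fit(Xc, yc)        # train only on boundary candidates
print(len(Xc), clf.score(X, y))
```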

5.
Routh's classical algorithm has the drawback that it involves divisions: if one starts, for example, with integer or polynomial entries, one ends up with rational numbers or rational functions, respectively. Using Hurwitz determinants instead, this drawback can be avoided in principle; however, an efficient way to compute these determinants without introducing fractions has been missing. An algorithm to compute these determinants in a fraction-free manner is presented, and it is shown to be optimal with respect to the growth of the entries.
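A minimal sketch of fraction-free elimination in the spirit described above, using the well-known Bareiss scheme for an ordinary integer determinant; the paper's algorithm targets Hurwitz determinants specifically:

```python
def bareiss_determinant(M):
    """Fraction-free determinant (Bareiss): every intermediate division is
    exact, so integer (or polynomial) entries never become fractions and
    entry growth stays under control. Assumes no zero pivot is met."""
    A = [row[:] for row in M]          # work on a copy
    n = len(A)
    prev_pivot = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev_pivot
        prev_pivot = A[k][k]
    return A[n - 1][n - 1]

# 3x3 integer example: all intermediate values remain exact integers.
print(bareiss_determinant([[2, 3, 1], [4, 1, 5], [6, 2, 7]]))   # 2
```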

6.
With the arrival of the big-data era, fast order-preserving matching and retrieval has become a key problem for many big-data applications. Through abstraction and reduction, data objects can be modeled as point sets or sequences with several attributes, turning data matching into character- or number-sequence matching. This paper proposes an order-preserving matching and retrieval algorithm based on similarity filtering, which works in three steps. (1) Data conversion: the original sequence is converted into a binary sequence according to its amplitude trend; for each character, the binary code is defined by the relationship among three points (the character and its two neighbors), which accurately captures convex or concave growth (or decline) among the three adjacent points. (2) Data reduction: to simplify the similarity computation between candidate and pattern sequences, both are normalized to a fixed interval using an amplitude-ratio-based reduction. (3) Similarity computation: to distinguish different convex or concave growth (or decline) amplitudes, the sum of the absolute differences between corresponding points of the candidate and pattern sequences is used as the similarity criterion; a fast matching method based on similarity filtering then finds the set of subsequences whose trend agrees with the pattern and ranks them by similarity. Theoretical analysis and experiments show that (1) the algorithm has sublinear time complexity; (2) it fixes the problem in the algorithm of Chhabra et al. that the oscillation amplitude of the data is left uncontrolled, and it also handles sequences that are piecewise regular but globally dissimilar to the pattern; and (3) it avoids the missed matches caused by the sorting of matched sequences in the algorithm of Chhabra et al. The method not only matches more subsequences with consistent trends, and matches them more accurately, but also ranks the candidate substrings by their similarity to the pattern, providing a basis for further precise retrieval.
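A minimal sketch of the three-step pipeline (trend encoding, normalization to a fixed interval, ranking by sum of absolute differences); the encoding and constants here are illustrative and do not reproduce the paper's exact definitions:

```python
import numpy as np

def trend_code(seq):
    """Binary trend code: for each interior point, compare it with the mean
    of its two neighbors and record convex (1) versus concave (0) shape."""
    s = np.asarray(seq, dtype=float)
    return (s[1:-1] > (s[:-2] + s[2:]) / 2).astype(int)

def normalize(seq):
    """Rescale a window to [0, 1] so candidate and pattern are comparable."""
    s = np.asarray(seq, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span else np.zeros_like(s)

def ranked_matches(series, pattern, top_k=3):
    """Slide over the series, keep windows with the same trend code as the
    pattern, and rank them by sum of absolute differences (smaller = closer)."""
    m = len(pattern)
    p_code, p_norm = trend_code(pattern), normalize(pattern)
    hits = []
    for i in range(len(series) - m + 1):
        window = series[i:i + m]
        if np.array_equal(trend_code(window), p_code):      # trend filter
            sad = np.abs(normalize(window) - p_norm).sum()  # similarity
            hits.append((sad, i))
    return sorted(hits)[:top_k]

series = [3, 5, 9, 8, 4, 6, 10, 9, 5, 7]
pattern = [1, 2, 4, 3]
print(ranked_matches(series, pattern))
```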

7.
When studying more than one contingency table at the same time, one must take into account that the factorial results may be affected both by differences between the table totals and by the different structures of the relationships within the tables. Two new methods have recently appeared that seek to solve this problem within correspondence analysis, using certain characteristics of multiple factorial analysis: Simultaneous Analysis (SA) and Multiple Factorial Analysis for Contingency Tables (MFACT). The two methods are very similar; the main difference between them lies in the weights attributed to each table. Their similarities and differences are discussed, and a brief example shows the factorial results provided by each method.

8.
We propose a new method, called the 'size leap' algorithm, for finding motifs of maximum size that are common to at least two fragments. It allows the creation of a reduced database of motifs from a set of sequences whose sizes follow the series of Fibonacci numbers, and its convenience lies in the efficiency of the motif extraction. It can be applied to establishing overlap regions for DNA sequence reconstruction and to the multiple alignment of biological sequences. As an application of the size leap algorithm, a method for complete DNA sequence reconstruction by extraction of the longest motifs ('anchor motifs') is presented, and the details of a reconstruction from three sequenced fragments are given as an example.
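A minimal sketch of the 'anchor motif' idea: the longest motif shared by two fragments, found here by ordinary dynamic programming rather than by the size leap algorithm itself:

```python
def longest_common_motif(a, b):
    """Longest substring common to fragments a and b, used to anchor the
    overlap between two sequenced fragments during reconstruction."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

# Overlapping sequenced fragments share a long anchor motif.
print(longest_common_motif("ACGTTGCAGGTA", "GCAGGTATTCGA"))   # GCAGGTA
```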

9.
A model is proposed that can be used to classify algorithms as inherently sequential. The model captures the internal computations of algorithms, whereas previous work in complexity theory has focused on the solutions that algorithms compute. Direct comparison of algorithms within the framework of the model is possible, and the model is useful for identifying hard-to-parallelize constructs that parallel programmers should avoid. Its utility is demonstrated through applications to graph searching: a stack breadth-first search (BFS) algorithm is analyzed and proved inherently sequential, using a new proof technique in the reduction. This result for stack BFS contrasts sharply with a result showing that a queue-based BFS algorithm is in NC. An NC algorithm to compute greedy depth-first search numbers in a dag is presented, and a combination search strategy called breadth-depth search is also proved inherently sequential.
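A minimal sketch of the two constructs contrasted above: the same generic search driven by a FIFO queue (ordinary BFS) versus a LIFO stack (the stack-based variant); the complexity results themselves are not reproduced by this illustration:

```python
from collections import deque

def traverse(graph, start, use_stack=False):
    """Generic graph search: a FIFO queue yields the usual BFS order,
    while a LIFO stack yields the stack-based order, shown here only to
    illustrate how the two constructs differ."""
    frontier, seen, order = deque([start]), {start}, []
    while frontier:
        v = frontier.pop() if use_stack else frontier.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return order

g = {0: [1, 2], 1: [3, 4], 2: [5, 6], 3: [], 4: [], 5: [], 6: []}
print(traverse(g, 0))                  # queue BFS:   [0, 1, 2, 3, 4, 5, 6]
print(traverse(g, 0, use_stack=True))  # stack order: [0, 2, 6, 5, 1, 4, 3]
```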

10.
This paper gives an improved polynomial algorithm for the genome sorting-by-translocations problem. The original algorithm uses O(n) storage and O(n^3) time; the improved algorithm still uses O(n) storage but runs in O(n^2 log n) time. Specifically, the time complexity of computing the translocation distance is reduced from O(n^3) to O(n^2), and that of computing the translocation sequence is reduced from O(n^3) to O(n^2 log n).
