Similar literature
20 similar documents found (search time: 31 ms)
1.
P. D. Smith 《Software》1991,21(10):1065-1074
Sunday devised string matching methods that are generally faster than the Boyer-Moore algorithm. His fastest method used statistics of the language being scanned to determine the order in which character pairs are to be compared. In this paper the performances of similar, but language-independent, algorithms are examined. Results comparable with those of the language-based algorithms can be achieved with an adaptive technique. In terms of character comparisons, a faster algorithm than Sunday's is constructed by using the larger of two pattern shifts.
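A minimal sketch of the shift rule described above, assuming the language-independent variant that advances by the larger of the Sunday-style shift (character just past the window) and the Horspool-style shift (last character of the window); the adaptive refinements examined in the paper are not reproduced.

```python
def smith_search(text, pattern):
    """Sketch: advance by the larger of Sunday's quick-search shift and
    Horspool's shift, as described in the abstract."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return -1 if m > n else 0
    # Sunday table: distance from a character to one position past the pattern end.
    sunday = {c: m - i for i, c in enumerate(pattern)}
    # Horspool table: distance from a character to the pattern end (last char excluded).
    horspool = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i                                   # first occurrence
        s1 = sunday.get(text[i + m], m + 1) if i + m < n else m + 1
        s2 = horspool.get(text[i + m - 1], m)
        i += max(s1, s2)                               # take the larger shift
    return -1

assert smith_search("language-independent algorithms", "independent") == 9
```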

2.
For multiple-input multiple-output (MIMO) spatial multiplexing systems, a multi-branch minimum mean square error successive interference cancellation (MB-MMSE-SIC) detector based on cost functions and ordering patterns is proposed. Specifically, a selection rule is designed to choose the branch with the best cost-function performance, and by using a different detection ordering pattern in each branch, the SIC algorithm detects signals in decreasing order of signal-to-interference-plus-noise ratio (SINR), thereby achieving full detection diversity. To further reduce computational complexity, an efficient adaptive receiver that updates the filter weight vectors with the recursive least squares (RLS) algorithm is also proposed, yielding an adaptive RLS-based implementation of the MB-SIC receiver. In addition, the bit error probability of the proposed detector is analyzed. Simulation results show that, compared with existing detection algorithms, the proposed algorithm not only has lower computational complexity but also achieves better bit error rate performance.
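For illustration, a single-branch MMSE-SIC sketch in numpy that detects streams in decreasing post-filter SINR order, the core step that each branch of the proposed detector performs; the branch-selection rule, the RLS-based adaptive filters and the BER analysis are not reproduced, and the QPSK constellation and flat-fading channel are assumptions.

```python
import numpy as np

def qpsk_slice(z):
    """Hard decision to the nearest unit-energy QPSK symbol."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

def mmse_sic(H, y, noise_var):
    """One SIC branch: detect streams in decreasing post-MMSE SINR order."""
    nt = H.shape[1]
    remaining = list(range(nt))
    s_hat = np.zeros(nt, dtype=complex)
    while remaining:
        Hr = H[:, remaining]
        A_inv = np.linalg.inv(Hr.conj().T @ Hr + noise_var * np.eye(len(remaining)))
        # Smallest MMSE (diagonal of the error covariance) <=> highest SINR.
        k = int(np.argmin(np.diag(A_inv).real))
        w = A_inv @ Hr.conj().T                # MMSE filters for the remaining streams
        idx = remaining[k]
        s_hat[idx] = qpsk_slice(w[k] @ y)
        y = y - H[:, idx] * s_hat[idx]         # cancel the detected stream
        remaining.pop(k)
    return s_hat

# Toy usage: 4x4 flat-fading channel, unit-energy QPSK, ~20 dB SNR.
rng = np.random.default_rng(0)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
s = qpsk_slice(rng.standard_normal(4) + 1j * rng.standard_normal(4))
noise_var = 0.01
y = H @ s + np.sqrt(noise_var / 2) * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
errors = int(np.sum(mmse_sic(H, y, noise_var) != s))
print(errors)   # symbol errors; typically 0 at this SNR
```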

3.

One method to overcome the notorious efficiency problems of logical reasoning algorithms in AI has been to combine a general-purpose reasoner with several special-purpose reasoners for commonly used subtasks. In this paper we use Schubert's (Schubert et al. 1983, 1987) method of implementing a special-purpose class reasoner. We show that it is possible to replace Schubert's preorder number class tree by a preorder number list without loss of functionality. This form of the algorithm lends itself perfectly to a parallel implementation, and we describe the design, coding and testing of such an implementation. Our algorithm is practically independent of the size of the class list, and even with several thousand nodes learning times are under a second and retrieval times are under 500 ms.
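A small sketch of the interval trick behind this kind of special-purpose class reasoner: every class receives its preorder number together with the largest preorder number in its subtree, so a subclass query reduces to an interval-containment test. The data layout and the parallel implementation of the paper are not reproduced.

```python
def number_classes(taxonomy, root):
    """Assign (preorder, max-descendant-preorder) intervals to every class.
    taxonomy maps a class name to the list of its direct subclasses."""
    interval = {}
    counter = 0

    def visit(node):
        nonlocal counter
        pre = counter
        counter += 1
        hi = pre
        for child in taxonomy.get(node, []):
            hi = max(hi, visit(child))
        interval[node] = (pre, hi)
        return hi

    visit(root)
    return interval

def is_subclass(interval, a, b):
    """True iff class a lies in the subtree rooted at class b."""
    pa, _ = interval[a]
    pb, hb = interval[b]
    return pb <= pa <= hb

# Toy taxonomy.
taxonomy = {
    "thing": ["animal", "plant"],
    "animal": ["bird", "mammal"],
    "mammal": ["dog", "cat"],
}
iv = number_classes(taxonomy, "thing")
print(is_subclass(iv, "dog", "animal"))   # True
print(is_subclass(iv, "dog", "plant"))    # False
```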

4.
International Journal of Computer Mathematics, 2012, 89(6):1315-1328
In this paper we present a method for Catalan number decomposition into expressions of the form (2+i). This method gives convex polygon triangulations in the Hurtado–Noy ordering, and thus establishes a relationship between these expressions and that ordering. The corresponding algorithm for Catalan number decomposition is developed and implemented in Java, as is an algorithm that generates convex polygon triangulations. Finally, we compare Hurtado's algorithm with our algorithm based on the decomposition method.
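For context, a short sketch of the link between Catalan numbers and convex polygon triangulations: a convex polygon with n+2 vertices has exactly C_n triangulations, computed here with the standard recurrence. The paper's decomposition into expressions of the form (2+i) and the Hurtado–Noy ordering are not reproduced.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def catalan(n):
    """C_0 = 1, C_n = sum_{i=0}^{n-1} C_i * C_{n-1-i}."""
    if n == 0:
        return 1
    return sum(catalan(i) * catalan(n - 1 - i) for i in range(n))

def triangulation_count(vertices):
    """Number of triangulations of a convex polygon with `vertices` corners."""
    return catalan(vertices - 2)

print([catalan(n) for n in range(7)])   # [1, 1, 2, 5, 14, 42, 132]
print(triangulation_count(6))           # 14 triangulations of a convex hexagon
```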

5.
This paper discusses certain extensions of Smith's method to digital, cascaded and multivariate plants.

Difficulties inherent in Smith's method are discussed briefly, and additions to the method are suggested for plants where drift and plant/model mismatch give rise to design difficulties.

Certain insights are given into the problem of mismatch of the delay model.

6.
We introduce a procedure, based on the max-min clustering method, that identifies a fixed order of training pattern presentation for fuzzy adaptive resonance theory mapping (ARTMAP). This procedure is referred to as the ordering algorithm, and the combination of this procedure with fuzzy ARTMAP is referred to as ordered fuzzy ARTMAP. Experimental results demonstrate that ordered fuzzy ARTMAP exhibits a generalization performance that is better than the average generalization performance of fuzzy ARTMAP, and in certain cases as good as, or better than, the best fuzzy ARTMAP generalization performance. We also calculate the number of operations required by the ordering algorithm and compare it to the number of operations required by the training phase of fuzzy ARTMAP. We show that, under mild assumptions, the number of operations required by the ordering algorithm is a fraction of the number of operations required by fuzzy ARTMAP.
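A compact sketch of a max-min ordering of training patterns of the kind described above: each next pattern is the one whose minimum distance to the patterns already selected is largest. The choice of the first pattern and the tie handling are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def max_min_order(patterns, first=None):
    """Order patterns by the max-min rule: pick next the pattern whose minimum
    distance to the already-selected patterns is largest."""
    X = np.asarray(patterns, dtype=float)
    n = len(X)
    if first is None:
        # Assumption: start from the pattern farthest from the mean.
        first = int(np.argmax(np.linalg.norm(X - X.mean(axis=0), axis=1)))
    order = [first]
    # Minimum distance of every pattern to the selected set, updated incrementally.
    d = np.linalg.norm(X - X[first], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(d))
        order.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return order

X = [[0, 0], [0.1, 0.1], [1, 1], [0.9, 1.0], [0, 1]]
print(max_min_order(X))   # spread-out patterns come first, near-duplicates last
```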

7.
On the Origin of Objects (Smith 1996) is, at heart, an extended search for a non-circular and non-reductive characterization of two key notions: intentionality (the content or ‘aboutness’ distinctive of mental states) and computation (the familiar but elusive tool of much cognitive scientific explanation). Only a non-circular and non-reductive account of these key notions can, Smith believes, provide a secure platform for a proper understanding of the mind. The project has both a negative and a positive aspect. Negatively, Smith rejects views that attempt to identify the key notions with lower-level physical properties, arguing instead for a more abstract and systemic understanding. This negative effort occupies Part I of the book (Analysis). In Part II (Construction), we encounter the positive side of Smith's proposal: an attempt to develop a non-reductive analysis of computation and meaning able to meet the (rather severe) requirements laid out in Part I. One purpose of this critical review is to lay out this project in fairly simple terms. This is necessary since Smith's own treatment and prose sometimes obscure the flow of the argument. I suggest that, properly understood, Smith's proposal bears a clear affinity to ideas emerging from the dynamical systems movement within Cognitive Science, and that this tie-in can help put flesh on several of the more metaphorical characterizations in the book. My main criticism is that the book ultimately fails to provide an account able to meet Smith's own requirements for a truly non-reductive account of intentionality. This is especially the case regarding Smith's commitment to licensing a partition of the world based on no a priori assumptions whatsoever. The exercise is a valuable one, however, since it forces us to look harder at some foundational assumptions and at least hints at a new and refreshing perspective: one in which the key explanatory relations are grounded in facts about human practice and pragmatically established social norms.

8.
Text compression based on an adjacency-matrix full-text index model   (Cited 1 time: 0 self-citations, 1 by others)
Compression models based on variable-length words achieve higher compression efficiency than character-based models, but finding the optimal symbol set is an NP-complete problem. This paper proposes a greedy method for computing the minimum average entropy per Chinese character, which finds a locally optimal word list. The key to the method is to use an adjacency-matrix index of the text as the statistical basis. The adjacency-matrix full-text index is a new full-text index model proposed in this paper: it faithfully reflects the original text and is well suited to collecting preliminary statistics on it, which improves the efficiency of the algorithm. Its time complexity is linear in the number of distinct Chinese characters in the text, making it suitable for online use. Moreover, the compression model produced by the algorithm achieves a compression ratio of 0.47, a 25% improvement over the character-based compression model.
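A toy sketch of the greedy idea described above: starting from bigram (adjacency) counts of the text, repeatedly add the candidate word that most reduces the average entropy per original character under greedy longest-match segmentation. The adjacency-matrix full-text index itself is not reproduced, and all thresholds here are illustrative.

```python
from collections import Counter
from math import log2

def bits_per_char(text, words):
    """Average entropy (bits) per original character when the text is segmented
    greedily into the given multi-character words plus single characters."""
    symbols, i = [], 0
    words = sorted(words, key=len, reverse=True)   # longest match first
    while i < len(text):
        for w in words:
            if text.startswith(w, i):
                symbols.append(w)
                i += len(w)
                break
        else:
            symbols.append(text[i])
            i += 1
    counts = Counter(symbols)
    total = sum(counts.values())
    entropy = -sum(c / total * log2(c / total) for c in counts.values())
    return entropy * total / len(text)

def greedy_word_list(text, max_words=8):
    """Greedily grow a locally optimal word list from bigram (adjacency) counts."""
    bigrams = [b for b, _ in Counter(text[i:i + 2] for i in range(len(text) - 1)).most_common(50)]
    chosen, best = [], bits_per_char(text, [])
    while len(chosen) < max_words:
        cand = min(bigrams, key=lambda b: bits_per_char(text, chosen + [b]))
        score = bits_per_char(text, chosen + [cand])
        if score >= best:
            break
        chosen.append(cand)
        best = score
    return chosen, best

text = "the theory of the thermal theorem" * 3
words, bpc = greedy_word_list(text)
print(words, round(bpc, 3))   # frequent bigrams reduce the bits per character
```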

9.
10.
To greatly reduce the storage space required for collected road-roughness signals and to increase acquisition speed, standard road-roughness signals are compressively sampled and reconstructed based on compressed sensing theory. First, the approximate sparsity of the class-B road roughness signal in the frequency domain is verified, and the signal is compressively sampled. To address the shortcomings of existing convex-optimization methods and of the three commonly used greedy algorithms, an improved sparsity-adaptive matching pursuit algorithm is proposed that combines an improved simulated annealing algorithm with subspace pursuit: the improved simulated annealing algorithm quickly searches for the best-matching sparsity level, and subspace pursuit rapidly reconstructs the signal. Simulation experiments compare five reconstruction methods. The results show that the convex-optimization method is accurate but too slow; the OMP and SP algorithms are extremely fast but require prior experiments to estimate the sparsity of the signal, limiting their practicality; the SAMP algorithm adapts to the sparsity but with larger matching error and longer run time; the proposed method offers both good accuracy and fast execution, with mean R-squared and run time of 0.9837 and 2.77 s respectively, estimates the sparsity well, and its reconstruction speed is unaffected by an increase in the number of samples.
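A brief sketch of greedy compressed-sensing reconstruction in the spirit described above, using plain orthogonal matching pursuit on a signal that is sparse in a cosine basis; the paper's combination of improved simulated annealing with subspace pursuit for sparsity estimation is not reproduced.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-8):
    """Orthogonal matching pursuit: recover a `sparsity`-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

# Toy frequency-sparse "roughness" signal, sparse in an orthonormal DCT basis.
rng = np.random.default_rng(1)
n, m, k = 256, 80, 5
t = np.arange(n)
Psi = np.sqrt(2.0 / n) * np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n)
Psi[:, 0] /= np.sqrt(2)                                # orthonormal DCT-II columns
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
signal = Psi @ coeffs
Phi = rng.standard_normal((m, n)) / np.sqrt(m)         # random measurement matrix
y = Phi @ signal                                       # compressed samples
rec = Psi @ omp(Phi @ Psi, y, k)
print(np.linalg.norm(rec - signal) / np.linalg.norm(signal) < 1e-6)  # expected: True
```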

11.
王楠  吴云 《计算机应用研究》2023,40(4):1154-1159
Because MySQL tunes the linear read-ahead threshold and the hot/cold ratio of its hot-cold LRU algorithm through configuration parameters, the buffer pool suffers from performance bottlenecks. To address this, an adaptive buffer management method is proposed that uses regret-minimization online reinforcement learning to design an adaptive threshold-adjustment algorithm and an adaptive hot-cold cache replacement algorithm. First, MySQL's read-ahead algorithm and hot-cold cache replacement algorithm are studied in depth to clarify how the read-ahead threshold and the hot/cold ratio affect each of them. Second, using a FIFO history queue and additional auxiliary fields, a parameter evaluation procedure is designed to assess in real time whether the current parameter is too large or too small. Finally, a parameter adjustment model is designed that uses the performance-monitoring metrics of MySQL's native read-ahead and cache replacement algorithms to adjust the parameters appropriately. In 900 simulation experiments on the FIU traces, the two adapted algorithms reduce disk I/O by 8% and increase the cache hit rate by 24% compared with MySQL's native baseline read-ahead and hot-cold cache algorithms, with essentially no loss of running speed; compared with the latest cache replacement algorithms, the adapted hot-cold cache replacement algorithm runs 1.6 times faster while maintaining the cache hit rate.
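A schematic sketch of the regret-minimization ingredient: treat a few candidate read-ahead thresholds as experts and run multiplicative weights (Hedge) over a hit-rate reward. Every name and the reward signal below are hypothetical illustrations, not MySQL internals or the paper's algorithm.

```python
import math, random

class HedgeTuner:
    """Full-information Hedge over a few candidate parameter values.
    Hypothetical sketch: the candidates could be read-ahead thresholds and the
    per-round rewards normalized hit-rate estimates derived from monitoring
    counters; none of this is MySQL's actual interface."""

    def __init__(self, candidates, eta=0.1):
        self.candidates = list(candidates)
        self.eta = eta
        self.weights = [1.0] * len(self.candidates)

    def choose(self):
        """Sample the value to apply next, proportionally to the current weights."""
        return random.choices(self.candidates, weights=self.weights, k=1)[0]

    def update(self, rewards):
        """rewards[i] in [0, 1] for candidate i; larger is better."""
        self.weights = [w * math.exp(self.eta * r)
                        for w, r in zip(self.weights, rewards)]

def toy_hit_rate(threshold, best=32):
    """Stand-in for real feedback: hit rate peaks near some unknown best value."""
    return max(0.0, 1.0 - abs(threshold - best) / 64)

tuner = HedgeTuner([8, 16, 32, 56])
for _ in range(200):
    applied = tuner.choose()                     # threshold used this round
    tuner.update([toy_hit_rate(c) for c in tuner.candidates])
best = max(zip(tuner.weights, tuner.candidates))[1]
print(best)   # 32: the regret-minimizing choice under the toy feedback
```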

12.
In the direct solution of sparse symmetric and positive definite linear systems, finding an ordering of the matrix that minimizes the height of the elimination tree (an indication of the number of parallel elimination steps) is crucial for effectively computing the Cholesky factor in parallel. This problem is known to be NP-hard. Although many effective heuristics have been proposed, the questions of how close these heuristics come to the optimum and how to further reduce the height of the elimination tree remain unanswered. This paper is an effort in that direction. We introduce a genetic algorithm tailored to this parallel ordering problem, characterized by two novel genetic operators: adaptive merge crossover and tree rotate mutation. Experiments showed that our approach is cost-effective in the number of generations evolved to reach a better solution, further reducing the height of the elimination tree.
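A short sketch of the objective such a genetic algorithm minimizes: given a symmetric sparsity pattern and a candidate ordering, build the elimination tree with Liu's parent-finding procedure and report its height. The crossover and mutation operators themselves are not reproduced.

```python
def elimination_tree_height(adjacency, order):
    """adjacency: dict vertex -> set of neighbours (symmetric pattern, no diagonal).
    order: elimination order (a permutation of the vertices).
    Returns the height (in edges) of the elimination tree."""
    pos = {v: k for k, v in enumerate(order)}
    parent = {}      # parent in the elimination tree
    ancestor = {}    # path-compressed "virtual root" pointers
    for j in order:
        for u in adjacency[j]:
            if pos[u] >= pos[j]:
                continue                      # only earlier-eliminated neighbours
            r = u
            while r in ancestor and ancestor[r] != j:
                nxt = ancestor[r]
                ancestor[r] = j               # path compression
                r = nxt
            if r != j and r not in parent:
                parent[r] = j
                ancestor[r] = j

    def depth_to_root(v):
        d = 0
        while v in parent:
            v = parent[v]
            d += 1
        return d

    return max(depth_to_root(v) for v in order)

# 5-vertex path graph: the natural order gives a chain (height 4),
# a nested-dissection-style order halves it.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(elimination_tree_height(path, [1, 2, 3, 4, 5]))   # 4
print(elimination_tree_height(path, [1, 3, 5, 2, 4]))   # 2
```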

13.
An improved Wu-Manber multi-pattern matching algorithm and its application   (Cited 8 times: 0 self-citations, 8 by others)
To address the weakness of the Wu-Manber multi-pattern matching algorithm in handling suffix patterns, this paper presents an improved suffix-pattern handling algorithm that reduces the number of character comparisons during matching and improves running efficiency. Full-text retrieval experiments were carried out on 52,067 randomly selected TREC 2000 documents, comparing the number of character comparisons performed during matching by three algorithms: the original Wu-Manber algorithm, the improved algorithm with suffix patterns, and a simple improvement without suffix patterns. The experimental results show that the proposed improvement consistently reduces the number of character comparisons and increases matching speed and efficiency.
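For reference, a compact sketch of the baseline Wu-Manber scheme that the improvement starts from: a SHIFT table over character blocks plus a bucket of pattern tails, with verification whenever the shift is zero (block size B=2; the suffix-pattern refinement of the paper is not reproduced).

```python
from collections import defaultdict

def wu_manber(text, patterns, B=2):
    """Simplified Wu-Manber multi-pattern search (block size B, no prefix table).
    Returns a list of (position, pattern) matches."""
    m = min(len(p) for p in patterns)
    assert m >= B, "patterns must be at least B characters long"
    default = m - B + 1
    shift = defaultdict(lambda: default)
    bucket = defaultdict(list)                # last block of the m-char prefix -> patterns
    for p in patterns:
        for q in range(B, m + 1):             # block ends at position q (1-based) in p[:m]
            block = p[q - B:q]
            shift[block] = min(shift[block], m - q)
        bucket[p[m - B:m]].append(p)
    matches = []
    i = m - 1                                 # index of the last character of the window
    while i < len(text):
        block = text[i - B + 1:i + 1]
        s = shift[block]
        if s == 0:
            start = i - m + 1
            for p in bucket[block]:           # verify candidates sharing this block
                if text.startswith(p, start):
                    matches.append((start, p))
            i += 1
        else:
            i += s
    return matches

print(wu_manber("a suffix pattern matching example", ["pattern", "match", "amp"]))
# [(9, 'pattern'), (17, 'match'), (28, 'amp')]
```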

14.
15.
To resolve the conflict between reducing energy consumption in wireless sensor networks and maintaining data accuracy, a method is proposed that adaptively samples data and compresses it with compressed sensing. Traditional compressed-sensing-based data compression for wireless sensors samples only a subset of the nodes, so sudden events sensed by unsampled nodes are likely to be missed. The proposed method examines the data uploaded by all nodes before compression, which effectively avoids such missed detections. Exploiting the temporal correlation of the signal, the sampling frequency of each sensor is adapted with an improved method based on analysis of variance (ANOVA), and node residual energy is taken into account to reduce the number of acquisitions of stationary signals and to balance energy consumption across the network. On top of the LEACH protocol, compressed sensing is applied to intra-cluster data to remove spatial correlation before transmission to the sink node, reducing the overall energy consumption of the network. For possible missed alarms, an improved local event detection algorithm, sliding-window local event detection (SW-LED), is proposed to achieve accurate real-time anomaly detection and early warning. Experimental results show that the method effectively balances node energy consumption to extend the network lifetime while maintaining data accuracy, and greatly improves the detection rate of anomalies.
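A toy sketch of the variance-driven adaptive sampling idea: when recent readings are stationary the sampling interval is lengthened, and when they fluctuate it is shortened. The plain variance test below stands in for the ANOVA-based rule and ignores residual energy; all thresholds are illustrative.

```python
import statistics

def next_interval(recent, interval, lo=1.0, hi=60.0, var_threshold=0.05):
    """Adapt the sampling interval (seconds) from the variance of recent readings.
    Stationary signal -> sample less often; fluctuating signal -> sample more often."""
    if len(recent) < 2:
        return interval
    if statistics.variance(recent) < var_threshold:
        return min(interval * 2, hi)     # stable: back off
    return max(interval / 2, lo)         # changing: speed up

readings, interval = [], 4.0
for value in [20.1, 20.1, 20.2, 20.1, 25.3, 26.8, 27.1, 27.0, 27.1, 27.1]:
    readings.append(value)
    interval = next_interval(readings[-5:], interval)
    print(f"reading={value:5.1f}  next sample in {interval:4.1f}s")
```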

16.
An adaptive, multipole-accelerated technique is presented for the fast computation of capacitances of conducting structures that reside in stratified dielectric media. This technique is an extension of a previously reported method of moments based approach that uses a nonadaptive fast-multipole algorithm in conjunction with a closed-form Green's function for a stratified medium. In the proposed adaptive technique, the specific spatial arrangement of the images introduced by this Green's function is exploited to significantly reduce the excessive memory requirements associated with the earlier technique. It is shown that the reduction in memory is attained in such a manner that both speed and accuracy are not adversely affected. © 1996 John Wiley & Sons, Inc.

17.
孙琛琛  申德荣  寇月  聂铁铮  于戈 《软件学报》2016,27(9):2303-2319
Entity resolution is an important aspect of data quality and is indispensable for big data processing. Existing work on entity resolution focuses on object-similarity measures, blocking techniques, and supervised entity resolution, while the match-decision problem in unsupervised entity resolution has rarely been addressed. A clustering algorithm for entity resolution is proposed to fill this gap. A weighted similarity graph is built from the data objects and their similarities. During clustering, random walk with restart on the similarity graph is used to dynamically compute the similarity between a cluster and a node; the basic logic of the clustering is that each cluster iteratively absorbs the node closest to it. An object-ordering method is proposed to optimize the clustering order and improve accuracy, and an optimized computation of the stationary probability distribution of the random walk is proposed to reduce the cost of the clustering algorithm. Comparative experiments on real and synthetic datasets verify the effectiveness of the algorithm.
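A small numpy sketch of the random-walk-with-restart primitive the clustering relies on: restarting at a cluster's current members, power iteration gives a relevance score for every node, and the unassigned node with the highest score is the one the cluster would absorb next. The graph, parameters and absorb step are illustrative; the ordering method and the optimized stationary-distribution computation are not reproduced.

```python
import numpy as np

def rwr_scores(W, restart_nodes, c=0.3, iters=100):
    """Random walk with restart on a weighted similarity graph.
    W: symmetric similarity matrix; restart_nodes: nodes the walk jumps back to."""
    n = W.shape[0]
    P = W / W.sum(axis=0, keepdims=True)          # column-stochastic transition matrix
    e = np.zeros(n)
    e[restart_nodes] = 1.0 / len(restart_nodes)   # restart distribution
    r = e.copy()
    for _ in range(iters):
        r = (1 - c) * P @ r + c * e
    return r

# Two obvious groups {0, 1, 2} and {3, 4} in a toy similarity graph.
W = np.array([[0.0, 0.9, 0.8, 0.1, 0.0],
              [0.9, 0.0, 0.7, 0.0, 0.1],
              [0.8, 0.7, 0.0, 0.1, 0.1],
              [0.1, 0.0, 0.1, 0.0, 0.9],
              [0.0, 0.1, 0.1, 0.9, 0.0]])
cluster = [0]                                     # current cluster seed
scores = rwr_scores(W, cluster)
scores[cluster] = -np.inf                         # ignore nodes already absorbed
print(int(np.argmax(scores)))                     # 1: the node the cluster absorbs next
```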

18.
We present an efficient algorithm for the transformation of a Gröbner basis of a zero-dimensional ideal with respect to any given ordering into a Gröbner basis with respect to any other ordering. This algorithm is polynomial in the degree of the ideal. In particular, the lexicographical Gröbner basis can be obtained by applying this algorithm after a total degree Gröbner basis computation: it is usually much faster to compute the basis this way than with a direct application of Buchberger's algorithm.
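Such an order conversion is available in SymPy for zero-dimensional ideals; the snippet below, assuming a SymPy version that exposes GroebnerBasis.fglm, computes a total-degree (grlex) basis first and then converts it to the lexicographic ordering.

```python
from sympy import groebner
from sympy.abc import x, y

# A zero-dimensional ideal (finitely many common zeros).
F = [x**2 - 3*y - x + 1, y**2 - 2*x + y - 1]

# Cheap total-degree basis first, then change the ordering
# (GroebnerBasis.fglm is assumed to be available in your SymPy version).
G_grlex = groebner(F, x, y, order='grlex')
G_lex = G_grlex.fglm('lex')

print(G_lex)
print(G_lex == groebner(F, x, y, order='lex'))  # expected to match the direct lex computation
```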

19.
A non-zero-approaching adaptive learning rate is proposed to guarantee the global convergence of Oja's principal component analysis (PCA) learning algorithm. Most existing adaptive learning rates for Oja's PCA learning algorithm are required to approach zero as the learning step increases. However, this is not practical in many applications because of computational round-off limitations and tracking requirements. The proposed adaptive learning rate overcomes this shortcoming: it converges to a positive constant, thus increasing the evolution rate as the learning step increases. This differs from learning rates that approach zero, which slow the convergence considerably, and increasingly so, over time. Rigorous mathematical proofs of the global convergence of Oja's algorithm with the proposed learning rate are given in detail by studying the convergence of an equivalent deterministic discrete-time (DDT) system. Extensive simulations are carried out to illustrate and verify the theory. Simulation results show that this adaptive learning rate makes Oja's PCA algorithm better suited to online learning.
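A minimal sketch of Oja's single-unit PCA rule driven by a learning rate that converges to a positive constant rather than to zero; the specific rate schedule is an illustrative assumption, not the rate proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean data whose leading principal component is roughly along [0.91, 0.42].
C = np.array([[0.9, 0.3],
              [0.3, 0.4]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta_inf, burn = 0.01, 0.05          # the rate tends to eta_inf > 0 (assumed schedule)
for k, x in enumerate(X, start=1):
    eta = eta_inf + burn / k        # non-zero-approaching adaptive learning rate
    y = w @ x                       # neuron output
    w += eta * y * (x - y * w)      # Oja's single-neuron PCA update

# w should align with the true leading eigenvector of C.
eigvals, eigvecs = np.linalg.eigh(C)
v1 = eigvecs[:, np.argmax(eigvals)]
print(round(abs(w @ v1) / np.linalg.norm(w), 3))   # close to 1.0
```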

20.
To address the limited node energy, low measurement accuracy, and short lifetime of wireless sensor networks (WSNs), an abnormal-data-preprocessing adaptive estimation weighting fusion (ADAEWF) algorithm is proposed. To improve reliability, a data preprocessing mechanism is introduced based on abnormal data detection, the simple majority principle, and a comprehensive node-support function. To reduce the effect of measurement error on fusion accuracy, node measurements are fused by adaptive estimation weighting based on batch estimation and adaptive theory. A WSN simulation model is then built, and the mean square error of the fused result and the effective network lifetime are obtained for ADAEWF, the adaptive forecast weighting data fusion (AFWDF) algorithm, and the arithmetic mean method. Simulation results show that ADAEWF outperforms AFWDF and the arithmetic mean method in both fusion accuracy and effective network lifetime, demonstrating its advantages in improving the validity of fused data, the effective network lifetime, and the fusion accuracy.
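A compact sketch of the batch-estimation weighting at the heart of such fusion schemes: each node's batch is screened for abnormal readings and reduced to a mean and a variance, and the fused value weights nodes by inverse variance so noisier sensors contribute less. The median/MAD screen stands in for the paper's preprocessing mechanism, and all constants are illustrative.

```python
import statistics

def fuse(node_batches, k=3.0):
    """Inverse-variance (batch-estimation) weighted fusion of per-node readings,
    after a simple median/MAD screen for abnormal values (illustrative only)."""
    means, variances = [], []
    for batch in node_batches:
        med = statistics.median(batch)
        mad = statistics.median(abs(r - med) for r in batch)
        threshold = k * 1.4826 * mad          # ~k-sigma equivalent for Gaussian noise
        kept = [r for r in batch if abs(r - med) <= threshold]
        means.append(statistics.mean(kept))
        # Variance of the node's batch mean; floor avoids division by zero.
        var = statistics.variance(kept) / len(kept) if len(kept) > 1 else 1e-9
        variances.append(max(var, 1e-9))
    weights = [1.0 / v for v in variances]
    return sum(w * m for w, m in zip(weights, means)) / sum(weights)

# Three nodes measuring the same quantity: node 2 is noisy, node 3 has an outlier.
nodes = [
    [25.1, 25.0, 25.2, 25.1],
    [24.0, 26.3, 25.9, 23.8],
    [25.2, 25.1, 25.0, 39.7],
]
print(round(fuse(nodes), 2))   # ~25.1, dominated by the precise, outlier-free readings
```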
