Similar Documents
Found 20 similar documents (search time: 15 ms).
1.
A Fast Fractal Coding Algorithm Using Variance
A new fast fractal encoding algorithm based on image-block variance is proposed. Using the current minimum matching error and the variances of the candidate domain blocks and range blocks, unsuitable domain blocks are excluded, which greatly reduces the number of domain blocks that must be searched to find the best match for each range block. The algorithm produces, in a much shorter time, fractal codes nearly identical to those of a conventional full search.
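A minimal Python sketch of the variance-screening idea (illustrative only, not the authors' code): a cheap variance-based lower bound on the matching error discards domain blocks before the full least-squares fit. The function name, the scaling bound `s_max` and the block representation are assumptions made for the example.

```python
import numpy as np

def variance_screened_search(range_block, domain_blocks, s_max=1.0):
    """Illustrative variance screening for fractal block matching.

    For each candidate domain block D we would fit the affine map s*D + o to the
    range block R in the least-squares sense.  A domain block is skipped when a
    cheap variance-based lower bound on its matching error already exceeds the
    best error found so far.
    """
    R = range_block.astype(float).ravel()
    var_R = R.var()
    best = (None, np.inf)  # (index of best domain block, its squared error)
    for idx, D in enumerate(domain_blocks):
        D = D.astype(float).ravel()
        var_D = D.var()
        # Lower bound on the per-pixel residual: with |s| <= s_max the residual
        # variance is at least (sqrt(var_R) - s_max*sqrt(var_D))^2.
        bound = max(0.0, np.sqrt(var_R) - s_max * np.sqrt(var_D)) ** 2
        if bound * R.size >= best[1]:
            continue  # hopeless candidate, skip the expensive fit
        # Full least-squares fit of s and o for the surviving candidates.
        s = np.clip(np.polyfit(D, R, 1)[0], -s_max, s_max) if var_D > 0 else 0.0
        o = R.mean() - s * D.mean()
        err = np.sum((s * D + o - R) ** 2)
        if err < best[1]:
            best = (idx, err)
    return best
```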

2.
In this paper we present efficient approximation algorithms for the distance selection problem. Our technique is based on the well-separated pair decomposition proposed in [8]. Received May 16, 1999; revised June 5, 2001.

3.
Moments are one of the general-purpose techniques in image processing, pattern recognition and computer vision. This paper surveys the various fast algorithms for geometric moments and orthogonal moments and evaluates their performance. Fast algorithms for geometric moments include image transformation, the delta method, the corner-point method, Green's-theorem methods, image block representation, reconfigurable meshes, SIMD arrays and VLSI algorithms. Compared with geometric moments, fewer fast algorithms exist for orthogonal moments.
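As a concrete illustration of the separable speed-up behind several of the surveyed methods (a sketch, not taken from any specific paper), geometric moments m_pq can share the inner sums over x across all orders q:

```python
import numpy as np

def geometric_moments(image, max_order=3):
    """Separable computation of geometric moments m_pq = sum_y sum_x x^p y^q f(y, x).

    Instead of one full pass over the image per moment, the inner sums over x
    are computed once per order p and reused for every q.
    """
    f = np.asarray(image, dtype=float)
    H, W = f.shape
    x = np.arange(W, dtype=float)
    y = np.arange(H, dtype=float)
    # S[p, row] = sum_x x^p * f(row, x): one pass over the image per order p.
    S = np.stack([(f * x**p).sum(axis=1) for p in range(max_order + 1)])
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            moments[(p, q)] = float((S[p] * y**q).sum())
    return moments

# Example: centroid of a small test image.
img = np.zeros((8, 8)); img[2:5, 3:7] = 1.0
m = geometric_moments(img, max_order=1)
print(m[(1, 0)] / m[(0, 0)], m[(0, 1)] / m[(0, 0)])  # x and y centroid: 4.5, 3.0
```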

4.
We construct new fast evaluation algorithms for elementary algebraic and inverse functions based on the application of two methods: A.A. Karatsuba’s method of 1960 and the author’s FEE method of 1990. The computational complexity is close to optimal. The algorithms admit partial parallelization.

5.
Fast Algorithms for max independent set
We first propose a method, called the “bottom-up method”, that, informally, “propagates” improvements of the worst-case complexity for “sparse” instances to “denser” ones, and we show an easy though non-trivial application of it to the min set cover problem. We then tackle max independent set. Here, we propagate improvements of worst-case complexity from graphs of average degree d to graphs of average degree greater than d. Indeed, using algorithms for max independent set in graphs of average degree 3, we successively solve max independent set in graphs of average degree 4, 5 and 6. Then, we combine the bottom-up technique with measure-and-conquer techniques to get improved running times for graphs of maximum degree 5 and 6 but also for general graphs. The computation bounds obtained for max independent set are O*(1.1571^n), O*(1.1895^n) and O*(1.2050^n) for graphs of maximum (or, more generally, average) degree 4, 5 and 6 respectively, and O*(1.2114^n) for general graphs. These results improve upon the best known results for these cases for polynomial-space algorithms.
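For orientation, a tiny branch-and-reduce sketch for max independent set; it only illustrates the style of exponential-time search whose bounds the paper improves, and it does not implement the bottom-up or measure-and-conquer techniques. The adjacency-dict representation and the helper `_remove` are choices made for the example.

```python
def max_independent_set(adj):
    """Tiny branch-and-reduce sketch for MAX INDEPENDENT SET.

    `adj` maps each vertex to the set of its neighbours.  This is far from the
    O*(1.2114^n) bound reported above; it only shows the branching style.
    """
    if not adj:
        return set()
    # Reduction: a vertex of degree <= 1 can always be taken greedily.
    for v, nbrs in adj.items():
        if len(nbrs) <= 1:
            rest = _remove(adj, {v} | nbrs)
            return {v} | max_independent_set(rest)
    # Branch on a maximum-degree vertex v: either exclude v,
    # or take v and discard its whole neighbourhood.
    v = max(adj, key=lambda u: len(adj[u]))
    without_v = max_independent_set(_remove(adj, {v}))
    with_v = {v} | max_independent_set(_remove(adj, {v} | adj[v]))
    return with_v if len(with_v) > len(without_v) else without_v

def _remove(adj, vertices):
    return {u: nbrs - vertices for u, nbrs in adj.items() if u not in vertices}

# Example: a 5-cycle has a maximum independent set of size 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(max_independent_set(cycle))
```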

6.
Based on an analysis of the formal description of the two-dimensional discrete Walsh transform, a class of fast algorithms for multidimensional discrete Walsh transforms is designed, taking the 2-D Walsh transform as an example. The performance of the algorithm is analysed, and it is pointed out that algorithms of this class take many forms and are widely applicable.
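A minimal sketch of the standard fast Walsh–Hadamard transform applied separably to both dimensions; it illustrates the row–column structure such 2-D fast algorithms rely on, but it is not the algorithm designed in the paper. Hadamard (natural) ordering is used; the sequency-ordered Walsh transform differs only by a permutation of the outputs.

```python
import numpy as np

def fwht(a):
    """In-place style fast Walsh-Hadamard transform of a length-2^k vector, O(n log n)."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x = a[i:i + h].copy()
            y = a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y  # butterfly step
        h *= 2
    return a

def fwht2(img):
    """2-D Walsh-Hadamard transform via separable row-column application."""
    rows = np.apply_along_axis(fwht, 1, img)   # transform every row
    return np.apply_along_axis(fwht, 0, rows)  # then every column
```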

7.
Fast Theta-Subsumption with Constraint Satisfaction Algorithms
Relational learning and Inductive Logic Programming (ILP) commonly use as covering test the θ-subsumption test defined by Plotkin. Based on a reformulation of θ-subsumption as a binary constraint satisfaction problem, this paper describes a novel θ-subsumption algorithm named Django, which combines well-known CSP procedures and θ-subsumption-specific data structures. Django is validated using the stochastic complexity framework developed in CSPs, and imported into ILP by Giordana and Saitta. Principled and extensive experiments within this framework show that Django improves on earlier θ-subsumption algorithms by several orders of magnitude, and that different procedures are better at different regions of the stochastic complexity landscape. These experiments allow for building a control layer over Django, termed Meta-Django, which determines the best procedures to use depending on the order parameters of the θ-subsumption problem instance. The performance gains and good scalability of Django and Meta-Django are finally demonstrated on a real-world ILP task (emulating the search for frequent clauses in the mutagenesis domain), though the smaller size of the problems results in smaller gain factors (ranging from 2.5 to 30).
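For reference, a naive backtracking θ-subsumption test in Python, the brute-force baseline that CSP-based algorithms such as Django are designed to beat; the literal encoding is an assumption made for the example, and the subsumed clause is treated as variable-free.

```python
def theta_subsumes(c, d):
    """Check whether clause C theta-subsumes clause D, i.e. C*theta is a subset of D
    for some substitution theta.

    Clauses are sets of literals; a literal is (predicate, args) where variables are
    strings starting with an upper-case letter and everything else is a constant.
    """
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def unify_literal(lit, target, theta):
        pred, args = lit
        tpred, targs = target
        if pred != tpred or len(args) != len(targs):
            return None
        theta = dict(theta)
        for a, t in zip(args, targs):
            a = theta.get(a, a)          # apply the substitution built so far
            if is_var(a):
                theta[a] = t             # bind the variable
            elif a != t:
                return None              # constant clash
        return theta

    def search(remaining, theta):
        if not remaining:
            return True
        lit, rest = remaining[0], remaining[1:]
        for target in d:
            theta2 = unify_literal(lit, target, theta)
            if theta2 is not None and search(rest, theta2):
                return True
        return False

    return search(list(c), {})

# Example: {parent(X, Y)} theta-subsumes {parent(ann, bob), female(ann)}.
c = {("parent", ("X", "Y"))}
d = {("parent", ("ann", "bob")), ("female", ("ann",))}
print(theta_subsumes(c, d))  # True
```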

8.
Two Algorithms for Fast Polyhedron Ray-Tracing. Zhang Qian, Shi Jiaoying, Cai Hong. CAD & CG State Key Lab., Zhejiang University, 310027; Foshan Enterpr...

9.
Yang Wenbin. Computer Simulation, 2012, 29(3): 327-329, 393
A fast identity-authentication method is proposed. After a user applies for a certificate, the threshold idea is used to distribute the certificate-issuing task among the authentication participants; the user's identity is then verified from the set of partial certificates computed by these participants. This avoids the low authentication efficiency caused by concentrating all authentication computation at the certification authority. Experiments show that the method authenticates users quickly while preserving the security of the authentication, with satisfactory results.
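To make the threshold idea concrete, a minimal Shamir (t, n) secret-sharing sketch; it only illustrates how a task can be split among participants so that any t of them suffice, and it is not the authentication protocol proposed in the paper. The field prime and function names are arbitrary choices.

```python
import random

PRIME = 2**127 - 1  # a large prime field for the arithmetic; any large prime works

def share_secret(secret, n, t):
    """Split `secret` into n shares so that any t of them reconstruct it
    (Shamir's (t, n)-threshold scheme): evaluate a random degree-(t-1)
    polynomial with constant term `secret` at n distinct points."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret from at least t shares by Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share_secret(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```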

10.
Acceleration Algorithms for the Fast Wavelet Transform (I)
1. The urgent demand that adaptive methods place on the wavelet transform. Research on image, speech and other processing technologies has accelerated the development of wavelet theory [1-9]. Today, with the rapid progress of communication technology and of computer and robot perception (technologies that give machines vision, hearing, smell, temperature and motion sensing), signal processing is bound to develop even faster. As stated in [1], "all of a sudden a wide variety of nonlinear algorithms have inevitably emerged, opening the door from signal processing to modern mathematics. Beyond the traditional applications of signal transmission, coding and recovery, signal processing has also entered the field of information analysis."

11.
12.
Fast Algorithms for the Density Finding Problem
We study the problem of finding a subsequence of specific density in a sequence arising from the analysis of biomolecular sequences. Given a sequence A = (a_1, w_1), (a_2, w_2), …, (a_n, w_n) of n ordered pairs (a_i, w_i) of numbers a_i and widths w_i > 0 for each 1 ≤ i ≤ n, two nonnegative numbers ℓ, u with ℓ ≤ u, and a number δ, the Density Finding Problem is to find, over all O(n²) consecutive subsequences A(i, j) satisfying the width constraint ℓ ≤ w(i, j) = Σ_{r=i}^{j} w_r ≤ u, the consecutive subsequence A(i*, j*) whose density is closest to δ. The extensively studied Maximum-Density Segment Problem is the special case of the Density Finding Problem with δ = ∞. We show that the Density Finding Problem has a lower bound of Ω(n log n) in the algebraic decision tree model of computation. We give an algorithm for the Density Finding Problem that runs in optimal O(n log n) time and O(n log n) space when there is no upper bound on the width of the subsequence, i.e., u = w(1, n). For the general case, we give an algorithm that runs in O(n log² m) time and O(n + m log m) space, where m depends on ℓ, u and w_min = min_{r=1}^{n} w_r. As a byproduct, we give another O(n)-time and -space algorithm for the Maximum-Density Segment Problem. Supported by Grants NSC95-2221-E-001-016-MY3, NSC-94-2422-H-001-0001, and NSC-95-2752-E-002-005-PAE, and by the Taiwan Information Security Center (TWISC) under Grants NSC95-2218-E-001-001, NSC95-3114-P-001-002-Y, NSC94-3114-P-001-003-Y and NSC94-3114-P-011-001.
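A direct O(n²) reference implementation of the problem statement (not the paper's O(n log n) or O(n log² m) algorithms), useful for checking a faster implementation on small inputs; `lo` and `hi` stand for ℓ and u.

```python
def density_finding(pairs, lo, hi, delta):
    """Brute-force Density Finding: among all consecutive subsequences A(i, j)
    of `pairs` = [(a_1, w_1), ..., (a_n, w_n)] whose total width lies in
    [lo, hi], return the one whose density a(i, j) / w(i, j) is closest to delta.
    Returns (|density - delta|, i, j) or None if no subsequence is feasible."""
    n = len(pairs)
    best = None
    for i in range(n):
        a_sum = w_sum = 0.0
        for j in range(i, n):
            a_sum += pairs[j][0]
            w_sum += pairs[j][1]
            if lo <= w_sum <= hi:
                gap = abs(a_sum / w_sum - delta)
                if best is None or gap < best[0]:
                    best = (gap, i, j)
    return best

# Example: the subsequence (3,1),(1,1) has width 2 and density exactly 2.0.
print(density_finding([(3, 1), (1, 1), (5, 1), (2, 1)], lo=2, hi=3, delta=2.0))
```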

13.
Fast Block-Matching Algorithms for Motion Estimation
Block-based motion estimation is a key technique widely adopted in international video-compression standards. Building on a survey of current fast block-matching algorithms for motion estimation, this paper describes the principle of motion estimation and identifies three main issues in improving algorithm efficiency with respect to image quality, search speed and bit rate: the choice of the initial search point, the matching criterion, and the search strategy. The methods commonly used to address these three issues are described, compared and analysed, and some prospects for motion-estimation algorithms are given.
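As an example of a classic search strategy of the kind surveyed (the three-step search, not a specific algorithm from the paper), a small Python sketch using the sum of absolute differences as the matching criterion:

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between `block` and the same-sized window
    of `ref` whose top-left corner is (y, x)."""
    h, w = block.shape
    return np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum()

def three_step_search(block, ref, y0, x0, step=4):
    """Classic three-step search: evaluate the centre and its 8 neighbours at the
    current step size, move the centre to the best candidate, halve the step,
    and stop after the step reaches 1.  Returns the motion vector and its SAD."""
    cy, cx = y0, x0
    best = sad(block, ref, cy, cx)
    while step >= 1:
        best_y, best_x = cy, cx
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= ref.shape[0] - block.shape[0] and \
                   0 <= x <= ref.shape[1] - block.shape[1]:
                    cost = sad(block, ref, y, x)
                    if cost < best:
                        best, best_y, best_x = cost, y, x
        cy, cx = best_y, best_x
        step //= 2
    return (cy - y0, cx - x0), best
```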

14.
Fast Learning Algorithms for Feedforward Neural Networks
In order to improve the training speed of multilayer feedforward neural networks (MLFNN), we propose and explore two new fast backpropagation (BP) algorithms obtained: (1) by changing the error functions, using the exponential attenuation (or bell impulse) function and the Fourier kernel function as alternatives; and (2) by introducing a hybrid conjugate-gradient algorithm for global optimization with a dynamic learning rate, to overcome the conventional BP problems of getting stuck in local minima or converging slowly. Our experimental results demonstrate the effectiveness of the modified error functions, since the training speed is faster than that of existing fast methods. In addition, on real speech data our hybrid algorithm has a higher recognition rate than the Polak-Ribière conjugate-gradient and conventional BP algorithms, and has less training time, lower complexity and stronger robustness than the Fletcher-Reeves conjugate-gradient and conventional BP algorithms.

15.
Fast and Robust General Purpose Clustering Algorithms
General purpose and highly applicable clustering methods are usually required during the early stages of knowledge discovery exercises. k-MEANS has been adopted as the prototype of iterative model-based clustering because of its speed, simplicity and capability to work within the format of very large databases. However, k-MEANS has several disadvantages derived from its statistical simplicity. We propose an algorithm that remains very efficient, generally applicable and multidimensional, but is more robust to noise and outliers. We achieve this by using medians rather than means as estimators for the centers of clusters. Comparison with k-MEANS and EXPECTATION-MAXIMIZATION sampling demonstrates the advantages of our algorithm.
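A minimal Lloyd-style k-medians sketch, showing the medians-for-means substitution the abstract describes; it is not the authors' exact algorithm, and the L1 assignment metric and random initialization are choices made for the example.

```python
import numpy as np

def k_medians(X, k, n_iter=100, seed=0):
    """k-medians clustering of an (n, d) array X: like k-means, but cluster
    centres are coordinate-wise medians, which makes them more robust to
    noise and outliers than means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # Assign each point to its nearest centre (L1 distance suits medians).
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([
            np.median(X[labels == j], axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```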

16.
In answer to Ko’s question raised in 1983, we show that an initial value problem given by a polynomial-time computable, Lipschitz continuous function can have a polynomial-space complete solution. The key insight is simple: the Lipschitz condition means that the feedback in the differential equation is weak. We define a class of polynomial-space computation tableaux with equally weak feedback, and show that they are still polynomial-space complete. The same technique also settles Ko’s two later questions on Volterra integral equations.

17.
We present an improved average case analysis of the maximum cardinality matching problem. We show that in a bipartite or general random graph on n vertices, with high probability every non-maximum matching has an augmenting path of length O(log n). This implies that augmenting path algorithms like the Hopcroft-Karp algorithm for bipartite graphs and the Micali-Vazirani algorithm for general graphs, which have a worst case running time of O(m√n), run in time O(m log n) with high probability, where m is the number of edges in the graph. Motwani proved these results for random graphs when the average degree is at least ln(n) [Average Case Analysis of Algorithms for Matchings and Related Problems, Journal of the ACM, 41(6):1329-1356, 1994]. Our results hold as long as the average degree is a large enough constant. At the same time we simplify the analysis of Motwani.
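For context, a plain augmenting-path bipartite matcher (Kuhn's algorithm); the Hopcroft-Karp algorithm analysed above augments along shortest paths in phases, but this simpler version shows what an augmenting path is.

```python
def max_bipartite_matching(adj, n_right):
    """Simple augmenting-path bipartite matcher.

    `adj[u]` lists the right-side vertices adjacent to left vertex u.
    Returns the matching size and the right-to-left matching array.
    """
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching = 0
    for u in range(len(adj)):
        if try_augment(u, set()):
            matching += 1
    return matching, match_right

# Example: a 3x3 bipartite graph with a perfect matching.
print(max_bipartite_matching([[0, 1], [0], [1, 2]], n_right=3))  # (3, [1, 0, 2])
```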

18.
Support Vector Machines are a family of algorithms for the analysis of data based on convex Quadratic Programming. We derive randomized algorithms for training SVMs, based on a variation of Random Sampling Techniques; these have been successfully used for similar problems. We formally prove an upper bound on the expected running time which is quasilinear with respect to the number of data points and polynomial with respect to the other parameters, i.e., the number of attributes and the inverse of a chosen soft margin parameter. [This is the combined journal version of the conference papers (Balcázar, J.L. et al. in Proceedings of 12th International Conference on Algorithmic Learning Theory (ALT’01), pp. 119–134, 2001; Balcázar, J.L. et al. in Proceedings of First IEEE International Conference on Data Mining (ICDM’01), pp. 43–50, 2001; and Balcázar, J.L. et al. in Proceedings of SIAM Workshop in Discrete Mathematics and Data Mining, pp. 19–29, 2002).] The first and the fourth authors started this research while visiting the Centre de Recerca Matemàtica of the Institute of Catalan Studies in Barcelona. The first author was supported by the IST Programme of the EU under contract number IST-1999-14186 (ALCOM-FT), Spanish Government TIC2004-07925-C03-02, and CIRIT 2001SGR-00252. The second author conducted this research while she was with the Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, and was supported by a Grant-in-Aid (C-13650444) from the Japanese Government. The fourth author was supported in part by a Grant-in-Aid for Scientific Research on Priority Areas “Discovery Science” 1998–2000 from the Japanese Government.

19.
For a set P of n points in the plane and an integer k ≤ n, consider the problem of finding the smallest circle enclosing at least k points of P. We present a randomized algorithm that computes such a circle in O(nk) expected time, improving over previously known algorithms. Further, we present a linear-time δ-approximation algorithm that outputs a circle containing at least k points of P with radius less than (1+δ)·r_opt(P, k), where r_opt(P, k) is the radius of the minimum circle containing at least k points of P. The expected running time of this approximation algorithm is O(n + n·min((1/(kδ³)) log²(1/δ), k)).

20.
Classical multidimensional scaling (CMDS) is a widely used method for dimensionality reduction and data visualization. As data sets grow, the running time of CMDS increases sharply. To speed up CMDS, three fast algorithms suited to different distance matrices are studied. By fixing the pivot objects in advance and thereby avoiding unnecessary distance computations, a fast algorithm based on FastMap is proposed. A new algorithm, dcMDS (divide-and-conquer based MDS), is proposed based on a divide-and-conquer strategy. By choosing the landmark set appropriately, LMDS (landmark multidimensional scaling) is guaranteed to produce a solution consistent with CMDS. When the intrinsic dimension of the data is much smaller than the number of samples, all three algorithms produce solutions identical to those of CMDS while being substantially faster. Experiments confirm the consistency of the three algorithms with CMDS and their efficiency.
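For reference, the standard classical MDS baseline that the three fast algorithms accelerate (double-centering of the squared distance matrix followed by an eigendecomposition); this is textbook CMDS, not any of the fast variants described above.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical MDS: embed n objects with pairwise distance matrix D into
    `dim` dimensions via double-centering and the top eigenvectors."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix of the embedding
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dim]      # largest eigenvalues first
    scale = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * scale             # n x dim coordinates

# Example: collinear points are recovered (up to sign and translation) from distances.
pts = np.array([[0.0], [1.0], [3.0], [6.0]])
D = np.abs(pts - pts.T)
print(np.round(classical_mds(D, dim=1), 3))
```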
