Similar Documents
20 similar documents found (search time: 0 ms)
1.
2.
We study the maximum-flow algorithm of Goldberg and Tarjan and show that the largest-label implementation runs in O(n²m) time, giving a new proof of this fact. We compare our proof with the earlier work of Cheriyan and Maheswari, who showed that the largest-label implementation of the Goldberg-Tarjan preflow-push algorithm runs in O(n²m) time when implemented with current edges. Our proof that the number of nonsaturating pushes is O(n²m) does not rely on implementing pushes with current edges, and therefore holds for a much larger family of largest-label implementations of preflow-push algorithms. Research performed while the author was a Ph.D. student at Cornell University, partially supported by the Ministry of Education of the Republic of Turkey through scholarship program 1416.
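The bound above concerns the number of pushes rather than any particular data structure. A minimal highest-label preflow-push sketch (illustrative only: it scans vertices densely instead of keeping current-edge lists, and `max_flow_highest_label` with its dict-of-capacities interface is our own naming, not the paper's implementation):

```python
from collections import defaultdict

def max_flow_highest_label(capacity, n, s, t):
    """Preflow-push, always selecting an active vertex of largest label."""
    flow = defaultdict(int)
    excess = [0] * n
    height = [0] * n
    height[s] = n
    # Saturate every edge out of the source.
    for v in range(n):
        c = capacity.get((s, v), 0)
        if c > 0:
            flow[(s, v)] = c
            excess[v] += c
            excess[s] -= c

    def residual(u, v):
        return capacity.get((u, v), 0) - flow[(u, v)] + flow[(v, u)]

    def push(u, v, d):
        back = min(d, flow[(v, u)])      # cancel reverse flow first
        flow[(v, u)] -= back
        flow[(u, v)] += d - back
        excess[u] -= d
        excess[v] += d

    while True:
        active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
        if not active:
            break
        u = max(active, key=lambda v: height[v])   # largest label
        pushed = False
        for v in range(n):
            if residual(u, v) > 0 and height[u] == height[v] + 1:
                push(u, v, min(excess[u], residual(u, v)))
                pushed = True
                break
        if not pushed:
            # Relabel: one above the lowest neighbor reachable in the residual graph.
            height[u] = 1 + min(height[v] for v in range(n) if residual(u, v) > 0)
    return excess[t]
```

On termination no internal vertex carries excess, so the preflow is a flow and `excess[t]` is its value.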

3.
4.
Three algorithms for computing the coefficients of translated polynomials are discussed and compared from the point of view of complexity. The two classical translation algorithms, based on explicit application of the Taylor expansion theorem and on the Ruffini-Horner method, respectively, have complexity O(n²). A third algorithm, based on the fast Fourier transform, is shown to have complexity O(n log n). However, when the cost of arithmetic operations is explicitly taken into account, the Ruffini-Horner algorithm is an order of magnitude better than the one based on the Taylor expansion and competes quite well with the algorithm based on the fast Fourier transform.
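The Ruffini-Horner translation is short enough to show in full. A sketch (the function name and the ascending-coefficient convention are our own):

```python
def taylor_shift(coeffs, a):
    """Coefficients of p(x + a), given those of p in ascending order,
    via repeated Ruffini-Horner synthetic-division sweeps:
    O(n^2) ring operations in total."""
    c = list(coeffs)
    n = len(c)
    for i in range(n - 1):
        # One sweep fixes the next low-order coefficient of the shift.
        for j in range(n - 2, i - 1, -1):
            c[j] += a * c[j + 1]
    return c
```

For example, shifting p(x) = x² by a = 1 yields the coefficients of (x+1)² = x² + 2x + 1.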

5.
Here, the author attempts to tie the concept of active perception to attentive processing in general and to the complexity-level analysis of visual search described previously; the aspects of active vision as currently described form a subset of the full spectrum of attentional capabilities. Our approach is motivated by the search requirements of vision tasks, so we cast the problem as one of search preceding the application of methods for shape-from-X, optical flow, etc., and recognition in general. This perspective permits a dimension of analysis not found in current formulations of the active perception problem: that of computational complexity. This article describes where the active perception paradigm does and does not provide computational benefits along this dimension, and presents a formalization of the search component of active perception in order to do so. The link to attentional mechanisms is through the control of data acquisition and processing by the active process. The analysis applies to the general hypothesize-and-test search strategy, to time-varying scenes, and to the general problem of integrating successive fixations. Finally, an argument is presented as to why this framework is an extension of the behaviorist approaches to active vision.

6.
The complexities of various search algorithms are considered in terms of time, space, and cost of the solution path. It is known that breadth-first search requires too much space, while depth-first search can use too much time and does not always find a cheapest path. A depth-first iterative-deepening algorithm is shown to be asymptotically optimal along all three dimensions for exponential tree searches. The algorithm has been used successfully in chess programs, has been effectively combined with bidirectional search, and has been applied to best-first heuristic search as well. This heuristic depth-first iterative-deepening algorithm is the only known algorithm capable of finding optimal solutions to randomly generated instances of the Fifteen Puzzle within practical resource limits.
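A minimal sketch of depth-first iterative deepening (the `max_depth` cap and the path-based cycle check are our additions, to keep the example finite and runnable):

```python
def iddfs(start, goal, neighbors, max_depth=50):
    """Depth-first iterative deepening: run depth-limited DFS with
    limits 0, 1, 2, ...; space is O(depth), and the shallowest goal
    is reached first, as in breadth-first search."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for nxt in neighbors(node):
            if nxt not in path:          # avoid cycles along the current path
                found = dls(nxt, limit - 1, path + [nxt])
                if found is not None:
                    return found
        return None

    for limit in range(max_depth + 1):
        found = dls(start, limit, [start])
        if found is not None:
            return found
    return None
```

Because limit d fails before limit d+1 is tried, the first path returned has minimal length.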

7.
8.
9.
A Nonlinear Stepsize Control (NSC) framework has been proposed by Toint [Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization, Optim. Methods Softw. 28 (2013), pp. 82–95] for unconstrained optimization, generalizing many trust-region and regularization algorithms. More recently, worst-case complexity bounds for the generic NSC framework were proved by Grapiglia et al. [On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization, Math. Program. 152 (2015), pp. 491–520] in the context of non-convex problems. In this paper, improved complexity bounds are obtained for convex and strongly convex objectives.

10.
We analyze under what conditions the space complexity of combining two algorithms equals the sum of the space complexities of the original algorithms, i.e., the problem of space-complexity preservation. By formalizing the oracle-query model, we prove that when each subsequent oracle query is independent of all earlier oracle replies — that is, under non-adaptive queries — algorithm composition preserves space complexity, whereas under adaptive queries this does not necessarily hold.

11.
The notion of irreducible forms of systems of linear differential equations with formal power series coefficients as defined by Moser [Moser, J., 1960. The order of a singularity in Fuchs' theory. Math. Z. 379–398] and its generalisation, the super-irreducible forms introduced in Hilali and Wazner [Hilali, A., Wazner, A., 1987. Formes super-irréductibles des systèmes différentiels linéaires. Numer. Math. 50, 429–449], are important concepts in the context of the symbolic resolution of systems of linear differential equations [Barkatou, M., 1997. An algorithm to compute the exponential part of a formal fundamental matrix solution of a linear differential system. Journal of App. Alg. in Eng. Comm. and Comp. 8 (1), 1–23; Pflügel, E., 1998. Résolution symbolique des systèmes différentiels linéaires. Ph.D. Thesis, LMC-IMAG; Pflügel, E., 2000. Effective formal reduction of linear differential systems. Appl. Alg. Eng. Comm. Comp., 10 (2) 153–187]. In this paper, we reduce the task of computing a super-irreducible form to that of computing one or several Moser-irreducible forms, using a block-reduction algorithm. This algorithm works on the system directly, without converting it to the more general types of systems needed in our previous paper [Barkatou, M., Pflügel, E., 2007. Computing super-irreducible forms of systems of linear differential equations via Moser-reduction: A new approach. In: Proceedings of ISSAC'07. ACM Press, Waterloo, Canada, pp. 1–8]. We perform a cost analysis of our algorithm in order to give the complexity of the super-reduction in terms of the dimension and the Poincaré rank of the input system. We compare our method with previous algorithms and show that, for systems of large size, the direct block-reduction method is more efficient.

12.
13.
Four classifications of artificial intelligence search techniques are discussed: unidirectional uniprocessor, bidirectional uniprocessor, unidirectional multiprocessor, and bidirectional multiprocessor search techniques. Wave-shaping PBA* (WS-PBA*) and search-space-clustering PBA* (SSC-PBA*), two bidirectional AI search techniques, are compared. It is concluded that, by maintaining a small number of formed clusters, SSC-PBA* will be significantly faster than major wave-shaping bidirectional search algorithms.
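PBA* itself is a heuristic, parallel algorithm; as a runnable stand-in for the bidirectional idea it builds on, here is plain bidirectional BFS on a unit-cost graph (a simplified sketch, not WS-PBA* or SSC-PBA*):

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbors):
    """Grow a frontier from each end, one full layer at a time, and stop
    when they meet: each side reaches roughly half the solution depth,
    so about O(b^(d/2)) nodes per side instead of O(b^d)."""
    if start == goal:
        return 0
    dist_f, dist_b = {start: 0}, {goal: 0}
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        # Expand the smaller frontier.
        if len(qf) <= len(qb):
            q, dist, other = qf, dist_f, dist_b
        else:
            q, dist, other = qb, dist_b, dist_f
        best = None
        for _ in range(len(q)):          # one complete layer
            u = q.popleft()
            for v in neighbors(u):
                if v in other:           # frontiers meet
                    cand = dist[u] + 1 + other[v]
                    if best is None or cand < best:
                        best = cand
                elif v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if best is not None:
            return best
    return None
```

Scanning the whole layer before returning takes the best meeting point found in that layer rather than the first one.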

14.
A new search strategy, called depth-m search, is proposed for branch-and-bound algorithms, where m is a parameter to be set by the user. In particular, depth-1 search is equivalent to the conventional depth-first search, and depth-∞ search is equivalent to the general heuristic search (including best-bound search as a special case). It is confirmed by computational experiment that the performance of depth-m search changes continuously from that of depth-first search to that of heuristic search as m is changed from 1 to ∞. The exact upper bound on the size of the required memory space is derived and shown to be bounded by O(nm), where n is the problem size. Some methods for controlling m during computation are also proposed and compared, in order to carry out the entire computation within a given memory-space bound.
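One plausible reading of the interpolation, as a hedged sketch (this is our own guess at a selection rule reproducing the two stated limits, not necessarily the paper's exact definition): expand the best-bound node among the m most recently generated frontier nodes.

```python
def depth_m_search(root, children, bound, value, is_leaf, m=None):
    """Branch-and-bound minimization. With m=1 the window holds only the
    newest node, giving depth-first order; with m=None (unbounded) the
    whole frontier is eligible, giving best-bound search."""
    best = None
    frontier = [root]                    # kept in generation order
    while frontier:
        window = frontier if m is None else frontier[-m:]
        node = min(window, key=bound)
        frontier.remove(node)
        if best is not None and bound(node) >= best:
            continue                     # prune by lower bound
        if is_leaf(node):
            v = value(node)
            if best is None or v < best:
                best = v
        else:
            frontier.extend(children(node))
    return best
```

A toy use: minimize the leaf sum of a depth-3 tree whose branches add 2 or 3, with the partial sum as lower bound; both m=1 and m=None find the optimum 6.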

15.
Space complexity of estimation of distribution algorithms
In this paper, we investigate the space complexity of the Estimation of Distribution Algorithms (EDAs), a class of sampling-based variants of the genetic algorithm. By analyzing the nature of EDAs, we identify criteria that characterize the space complexity of two typical implementation schemes of EDAs, the factorized distribution algorithm and Bayesian network-based algorithms. Using random additive functions as the prototype, we prove that the space complexity of the factorized distribution algorithm and Bayesian network-based algorithms is exponential in the problem size even if the optimization problem has a very sparse interaction structure.
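For contrast with the exponential-space result, the univariate end of the EDA family has a trivially small model. A UMDA sketch on OneMax (all parameter values are illustrative choices of ours):

```python
import random

def umda_onemax(n=20, pop=60, parents=30, gens=40, seed=1):
    """UMDA, the simplest factorized (univariate) EDA, on OneMax.
    The model is just n independent marginals, so model space is O(n);
    the paper's point is that richer factorized / Bayesian-network
    models can need exponential space even on sparse problems."""
    rng = random.Random(seed)
    p = [0.5] * n                        # marginal probability that bit i is 1
    best = [0] * n
    for _ in range(gens):
        sample = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                  for _ in range(pop)]
        sample.sort(key=sum, reverse=True)
        if sum(sample[0]) > sum(best):
            best = sample[0]
        sel = sample[:parents]           # truncation selection
        # Re-estimate marginals, clamped away from 0/1 to keep diversity.
        p = [min(0.95, max(0.05, sum(ind[i] for ind in sel) / parents))
             for i in range(n)]
    return best
```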

16.
This paper is a new step in the development of linguistic geometry. This formal theory is intended to discover and generalize the inner properties of human expert heuristics, which have been successful in a certain class of complex control systems, and apply them to different systems. In this paper, we investigate heuristics extracted in the form of hierarchical networks of planning paths of autonomous agents. Employing linguistic geometry tools, the dynamic hierarchy of networks is represented as a hierarchy of formal attribute languages. The main ideas of this methodology are shown on two pilot examples of the solution of complex optimization problems. The first example is a problem of strategic planning for air combat, in which concurrent actions of four vehicles are simulated as serial interleaving moves. The second example is a problem of strategic planning for the space combat of eight autonomous vehicles (with interleaving moves) that requires generation of a search tree of depth 25 with a branching factor of 30. This is beyond the capabilities of modern and conceivable future computers employing conventional approaches. In both examples the linguistic geometry tools yielded deep and highly selective searches in comparison with conventional search algorithms. For the first example, a sketch of the proof of optimality of the solution is given.

17.
Analyzing the average-case complexity of algorithms is a practical but very difficult problem in computer science. In the past few years, we have demonstrated that Kolmogorov complexity is an important tool for analyzing the average-case complexity of algorithms, and we have developed the incompressibility method. In this paper, several simple examples are used to further demonstrate the power and simplicity of this method. We prove bounds on the average-case number of stacks (queues) required for sorting by sequential or parallel Queuesort or Stacksort.
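As a concrete instance of the quantity being counted, a toy Stacksort: one pass through a single stack, repeated until sorted (our simplified model; the paper's sequential and parallel variants may differ in details):

```python
def stack_pass(seq):
    """One pass of Stacksort: before pushing x, pop every smaller element
    on top of the stack to the output; flush the stack at the end."""
    stack, out = [], []
    for x in seq:
        while stack and stack[-1] < x:
            out.append(stack.pop())
        stack.append(x)
    while stack:
        out.append(stack.pop())
    return out

def stacks_needed(seq):
    """Number of sequential stack passes until the sequence is sorted:
    a toy version of the per-input cost whose average the
    incompressibility method bounds."""
    passes = 0
    while seq != sorted(seq):
        seq = stack_pass(seq)
        passes += 1
    return passes
```

For example, [2, 3, 1] becomes [2, 1, 3] after one pass and is sorted after two.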

18.
In this note we give a short proof of a recent result in [2] (Integral Equations Operator Theory 25 (1996) 182–198) which characterizes the admissibility of unbounded observation operators in linear, infinite-dimensional control theory. Moreover, we present the analogous result for the admissibility of unbounded control operators.

19.
The purpose of this paper is to outline various results regarding the computational complexity and the algorithms of nonmonotonic entailment in different coherence-based approaches. Starting from a (not necessarily consistent) belief base E and a pre-order on E, we first present different mechanisms for selecting preferred consistent subsets. Then we present different entailment principles for managing these multiple subsets. The crossing point of each generation mechanism m and each entailment principle p defines an entailment relation, which we study from the computational-complexity point of view. The results are not very encouraging, since the complexity of all these nonmonotonic entailment relations is, in most restricted languages, larger than the complexity of monotonic entailment. We therefore extend Binary Decision Diagram techniques, which are well suited to solving NP-hard logic-based problems. Both theoretical and experimental results are described along these lines in the last sections. This revised version was published online in June 2006 with corrections to the Cover Date.
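As an illustration of a generation mechanism, a toy incremental selection over a stratified base (a hedged sketch of one mechanism in the family the paper surveys; integer literals stand in for formulas, whereas real belief bases need a full propositional consistency test, which is where the NP-hardness enters):

```python
def consistent(formulas):
    """Toy consistency test: formulas are integer literals; i and -i clash."""
    s = set(formulas)
    return not any(-l in s for l in s)

def incremental_selection(strata):
    """Walk the strata of E in decreasing priority and keep every formula
    that stays consistent with what has been kept so far, yielding one
    preferred consistent subset."""
    kept = []
    for stratum in strata:
        for f in stratum:
            if consistent(kept + [f]):
                kept.append(f)
    return kept
```

With strata [[1, 2], [-2, 3], [-1]], the lower-priority literals -2 and -1 are rejected because they clash with the already-kept 2 and 1.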

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号