20 similar documents found; search took 15 ms
1.
In this paper we consider the problem of minimization of deterministic finite automata (DFA) with reference to Hopcroft’s algorithm. Hopcroft’s algorithm has several degrees of freedom, so different executions can lead to different sequences of refinements of the set of states before reaching the final partition. We find an infinite family of binary automata for which this process is unique, whatever strategy is chosen. Some recent papers (cf. Berstel and Carton (2004) [3], Castiglione et al. (2008) [6] and Berstel et al. (2009) [1]) have been devoted to finding families of automata for which Hopcroft’s algorithm exhibits its worst-case execution time. They are unary automata associated with circular words. However, automata minimization can also be achieved in linear time when the alphabet has only one letter (cf. Paige et al. (1985) [14]), but that method does not seem to extend to larger alphabets. In this paper we therefore address the tightness of Hopcroft’s algorithm when the alphabet contains more than one letter. In particular, we define an infinite family of binary automata that represents the worst case of Hopcroft’s algorithm for every execution. These automata are associated with particular trees, and we deepen the connection between the refinement process of Hopcroft’s algorithm and the combinatorial properties of such trees.
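The abstract above refers to Hopcroft’s partition-refinement scheme. A minimal, illustrative Python sketch is given below; the waiting set and the “keep the smaller half” rule mark exactly the degrees of freedom the paper discusses (which splitter to pick next, which half to enqueue). All names here are ours, not the paper’s.

```python
from collections import defaultdict, deque

def hopcroft_minimize(states, alphabet, delta, finals):
    """Partition-refinement (Hopcroft) sketch: returns the set of
    equivalence classes (frozensets) of `states`.
    `delta` maps (state, letter) -> state; `finals` is the accepting set."""
    # Precompute inverse transitions: inv[(a, q)] = states mapping to q on a.
    inv = defaultdict(set)
    for q in states:
        for a in alphabet:
            inv[(a, delta[(q, a)])].add(q)
    finals = frozenset(finals)
    nonfinals = frozenset(states) - finals
    partition = {p for p in (finals, nonfinals) if p}
    # The order in which splitters are processed is a degree of freedom.
    waiting = deque(partition)
    while waiting:
        splitter = waiting.popleft()
        for a in alphabet:
            # States with an a-transition into the splitter.
            x = set()
            for q in splitter:
                x |= inv[(a, q)]
            for block in list(partition):
                inter, diff = block & x, block - x
                if inter and diff:
                    partition.remove(block)
                    inter, diff = frozenset(inter), frozenset(diff)
                    partition |= {inter, diff}
                    if block in waiting:
                        waiting.remove(block)
                        waiting.extend([inter, diff])
                    else:
                        # Hopcroft's trick: enqueue only the smaller half.
                        waiting.append(min(inter, diff, key=len))
    return partition

# Tiny example: states 1 and 2 are equivalent, state 0 is not.
states = {0, 1, 2}
alphabet = {'a', 'b'}
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'a'): 1, (1, 'b'): 1,
         (2, 'a'): 2, (2, 'b'): 2}
classes = hopcroft_minimize(states, alphabet, delta, finals={1, 2})
```

Swapping `popleft` for `pop`, or enqueueing the larger half, yields other valid executions, which is precisely the freedom whose worst case the paper pins down.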
2.
In this paper we investigate the entanglement nature of quantum states generated by Grover’s search algorithm by means of algebraic geometry. More precisely, we establish a link between the entanglement of states generated by the algorithm and auxiliary algebraic varieties built from the set of separable states. This new perspective enables us to propose qualitative interpretations of earlier numerical results obtained by M. Rossi et al. We also illustrate our approach with a couple of examples investigated in detail.
3.
4.
Andrei Păun Mihaela Păun Alfonso Rodríguez-Patón 《Theoretical computer science》2009,410(24-25):2424-2430
We show that the absolute worst-case time complexity of Hopcroft’s minimization algorithm applied to unary languages is reached only for deterministic automata or cover automata following the structure of de Bruijn words. A previous paper by Berstel and Carton gave de Bruijn words as an example of a language that requires Ω(n log n) steps in the case of deterministic automata, by carefully choosing the splitting sets and processing them in FIFO order in the algorithm’s list of splitting sets. We refine that result by showing that the Berstel/Carton example is in fact the absolute worst case for unary languages and deterministic automata. We show that the same result also holds for cover automata under an algorithm based on Hopcroft’s method for the minimization of cover automata. We also show that a LIFO implementation of the splitting list does not attain the same absolute worst-case time complexity for unary languages, either for regular deterministic finite automata or for deterministic finite cover automata as defined by S. Yu.
5.
Micro crack detection with Dijkstra’s shortest path algorithm
Christina Gunkel Alexander Stepper Arne C. Müller Christine H. Müller 《Machine Vision and Applications》2012,23(3):589-601
A package based on the free software R is presented which allows the automatic detection of micro cracks and the corresponding statistical analysis of crack quantities. It uses a shortest path algorithm to detect micro cracks in situations where the cracks are surrounded by plastic deformations and where discriminating between cracks and plastic deformations is difficult. In a first step, crack clusters are detected as connected components of pixels with values below a given threshold. The crack paths are then determined by Dijkstra’s algorithm as longest shortest paths through the darkest parts of the crack clusters. Linear parts of kinked paths can be identified in this way. The new method was applied to over 2,000 images. Some statistical applications and a comparison with another free image tool are given.
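The pipeline described above (threshold, connected components, then a shortest path through the darkest pixels) can be illustrated with a small Dijkstra sketch on a pixel grid. This is our own toy reconstruction, not the authors’ R package; the weighting (cost = pixel intensity, so dark crack pixels are cheap to traverse) is an assumption on our part.

```python
import heapq

def dijkstra_grid(weights, start):
    """Dijkstra over a 4-connected 2D grid; weights[r][c] is the cost of
    entering pixel (r, c).  Low values for dark (crack) pixels mean that
    shortest paths run through the darkest parts of a crack cluster.
    Returns (dist, prev) for path reconstruction."""
    rows, cols = len(weights), len(weights[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float('inf')):
            continue                       # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + weights[nr][nc]
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist, prev

def crack_path(weights, a, b):
    """Reconstruct the cheapest (darkest) pixel path from a to b."""
    _, prev = dijkstra_grid(weights, a)
    path, node = [b], b
    while node != a:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Running both endpoint searches over a cluster and keeping the longest such shortest path would mimic the “longest shortest path” notion in the abstract.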
6.
A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch–Jozsa problem and Simon’s problem, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch–Jozsa problem with probability 1 using only one oracle query, and Simon’s problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable on a classical probabilistic Turing machine. This shows that the Deutsch–Jozsa and Simon’s problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up compared with their corresponding classical simulations. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separations between quantum and classical computation.
7.
Complicated waveguide modes as well as surface waves exist in the electromagnetic field induced by a horizontal electric dipole in layered lossless dielectrics between two ground planes. In the spectral domain, all these modes can be characterized by the rational parts with real poles of the vector and scalar potentials. The accurate extraction of these modes plays an important role in the evaluation of the Green’s function in the spatial domain. In this paper, a new algorithm based on rational approximation is presented which can accurately extract all the real poles, and the residue at each pole, simultaneously. Thus we can obtain all the surface wave modes and waveguide modes, which is of great help in the calculation of the spatial domain Green’s function. Numerical results demonstrate the accuracy and efficiency of the proposed method.
8.
《International Journal of Computer Mathematics》2012,89(2-4):181-194
One of the most important problems in numerical analysis and numerical optimization is solving a linear system of equations. Sometimes the solve must be repeated after one of the equations is replaced by a new one. In this paper, as a result of theoretical analysis, an algorithm model and a particular algorithm based on the ABS class are proposed. After the original linear system has been solved by the ABS class, the algorithms proposed here can efficiently solve the new system obtained from the original one by replacing one of its equations, by reusing information from the previous computation. These algorithms can be applied repeatedly, with little computational effort, as equations of the original system are successively replaced by new ones.
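The problem setting can be made concrete with a rank-one update: replacing one equation of Ax = b changes A by a rank-one matrix, so the Sherman–Morrison formula updates the inverse in O(n²) instead of re-solving from scratch. This sketch is an illustrative alternative in the same spirit, not the ABS-class update of the paper.

```python
def mat_vec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def replace_row_and_resolve(A, Ainv, b, i, new_row, new_bi):
    """Replacing equation i of Ax = b changes A by e_i (new_row - A[i])^T,
    a rank-one matrix, so Sherman-Morrison updates A^{-1} cheaply:
    (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u).
    Returns (new solution, updated inverse)."""
    n = len(A)
    v = [new_row[j] - A[i][j] for j in range(n)]        # change in row i
    col_i = [Ainv[r][i] for r in range(n)]              # A^{-1} e_i
    vAinv = [sum(v[k] * Ainv[k][j] for k in range(n)) for j in range(n)]
    denom = 1.0 + sum(v[j] * col_i[j] for j in range(n))
    Ainv_new = [[Ainv[r][j] - col_i[r] * vAinv[j] / denom for j in range(n)]
                for r in range(n)]
    b_new = list(b)
    b_new[i] = new_bi
    return mat_vec(Ainv_new, b_new), Ainv_new

# Solve diag(2, 4) x = (2, 4), then replace the first equation by x1 + x2 = 3.
A = [[2.0, 0.0], [0.0, 4.0]]
Ainv = [[0.5, 0.0], [0.0, 0.25]]
x_new, _ = replace_row_and_resolve(A, Ainv, [2.0, 4.0], 0, [1.0, 1.0], 3.0)
```

The update fails when `denom` is zero (the replaced system is singular); a production version would check this and refactor instead.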
9.
An inverted pendulum is a sensitive system of highly coupled parameters. In laboratories it is popular for modelling nonlinear systems such as mechanisms and control systems, and for optimizing programmes before they are applied in real situations. This study aims to find the optimum input setting for a double inverted pendulum (DIP), which requires an appropriate input to be able to stand and to achieve robust stability even when the system model is unknown. Such a DIP input could be widely applied in engineering fields for optimizing unknown systems on a limited budget. Previous studies have used various mathematical approaches to optimize settings for a DIP and have then designed control algorithms or physical mathematical models. This study did not adopt a mathematical approach for the DIP controller because our DIP has five input parameters within a nondeterministic system model. This paper proposes a novel algorithm, named UniNeuro, that integrates neural networks (NNs) and a uniform design (UD) in a model formed by the input and response of the experimental data (a metamodel). We employed a hybrid UD multiobjective genetic algorithm (HUDMOGA) to obtain the optimized input parameter settings. The UD was also embedded in the HUDMOGA to enrich the solution set, whereas each chromosome used for crossover, mutation, and generation of the UD was determined through a selection procedure and derived individually. Subsequently, we combined the Euclidean distance and the Pareto front to improve the performance of the algorithm. Finally, DIP equipment was used to confirm the settings. The proposed algorithm can produce nine alternative configurations of input parameter values to swing up the DIP and keep it standing with robust stability, from only 25 training data items and 20 optimized simulation results. Compared with a full factorial design, this approach can save considerable experiment time because the metamodel can be formed from only 25 experiments using the UD. Furthermore, the proposed algorithm can be applied to nonlinear systems with multiple constraints.
10.
Sudipto Guha 《The VLDB Journal The International Journal on Very Large Data Bases》2008,17(6):1509-1535
Synopsis construction algorithms have been found to be of interest in query optimization, approximate query answering and mining, and over the last few years several good synopsis construction algorithms have been proposed. These algorithms have mostly focused on the running time of the synopsis construction vis-a-vis the synopsis quality. However, the space complexity of synopsis construction algorithms has not been investigated as thoroughly. Many of the optimum synopsis construction algorithms are expensive in space. For some of these algorithms the space required to construct the synopsis is significantly larger than the space required to store the input. These algorithms rely on the fact that they require a smaller “working space” and most of the data can be resident on disc. The large space complexity of synopsis construction algorithms is a handicap in several scenarios. In the case of streaming algorithms, space is a fundamental constraint. In the case of offline optimal or approximate algorithms, a better space complexity often makes these algorithms much more attractive by allowing them to run in main memory without using disc, or alternately allows us to scale to significantly larger problems without running out of space. In this paper, we propose a simple and general technique that reduces the space complexity of synopsis construction algorithms. As a consequence we show that the notion of “working space” proposed in these contexts is redundant. This technique can be easily applied to many existing algorithms for synopsis construction problems. We demonstrate the performance benefits of our proposal through experiments on real-life and synthetic data. We believe that our algorithm also generalizes to a broader range of dynamic programs beyond synopsis construction.
Sudipto Guha’s research was supported in part by an Alfred P. Sloan Research Fellowship and by NSF Awards CCF-0430376 and CCF-0644119. A preliminary version of the paper appeared as “Space efficiency in synopsis construction algorithms”, VLDB Conference 2005, Trondheim, [19].
11.
R. Kragler 《Programming and Computer Software》2009,35(2):63-78
In this paper, by means of computer experiments, we study the advantages and disadvantages of the heuristic method of the “parallel integrator”. For this purpose we describe and use an implementation of the method in Mathematica. In some cases we compare this implementation with the original one in Maple.
12.
Calcolo - We derive results on the equivalence of piecewise polynomial approximations of a given function in the Sobolev space H(curl). Namely, we show that the global-best...
13.
14.
Goldreich (ECCC 2000) suggested a simple construction of a candidate one-way function f : {0,1}^n → {0,1}^m where each output bit is a fixed predicate P of a constant number d of (random) input bits. We investigate the security of this construction in the regime m = Dn, where D(d) is a sufficiently large constant. We prove that for any predicate P that correlates with either one or two of its inputs, f can be inverted with high probability.
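Goldreich’s construction is simple enough to state in a few lines of Python. The sketch below fixes m random d-subsets of the n input positions and applies a public predicate to each; the example predicate, which correlates with its first input, is of exactly the kind the abstract says can be inverted. The parameter values and all names are toy choices of ours.

```python
import random

def goldreich_owf(n, m, d, predicate, seed=0):
    """Sketch of Goldreich's candidate one-way function: fix, once and
    for all, m random d-subsets of the n input positions; output bit j
    applies the fixed public predicate to the d bits of its subset."""
    rng = random.Random(seed)              # the subsets are public and fixed
    subsets = [rng.sample(range(n), d) for _ in range(m)]
    def f(x):                              # x: sequence of n bits
        return tuple(predicate(tuple(x[i] for i in s)) for s in subsets)
    return f

# A predicate on d = 3 bits that correlates with its first input --
# precisely the kind of predicate shown to make f invertible.
P = lambda bits: bits[0] ^ (bits[1] & bits[2])
f = goldreich_owf(n=8, m=16, d=3, predicate=P)
y = f((1, 0, 1, 1, 0, 0, 1, 0))            # 16 output bits
```

Here m = 2n merely keeps the example small; the regime studied in the abstract takes m = Dn for a large constant D(d).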
15.
The initialisation of a neural network implementation of Sammon’s mapping, either randomly or based on the principal components (PCs) of the sample covariance matrix, is experimentally investigated. When PCs are employed, fewer experiments are needed and the network configuration can be set precisely without trial-and-error experimentation. Tested on five real-world databases, it is shown that very few PCs are required to achieve a shorter training period, lower mapping error and higher classification accuracy, compared with those based on random initialisation. Received: 20 April 1999, Received in revised form: 08 July 1999, Accepted: 05 August 1999
16.
The flower pollination algorithm (FPA) is a recently proposed metaheuristic inspired by the natural phenomenon of flower pollination. Since its invention, this rising-star metaheuristic has attracted considerable interest in the metaheuristic-optimisation community, and many works based on the FPA have already been proposed. However, these works have not given any deep analysis of the performance of the basic algorithm, nor of the variants already proposed. This makes it difficult to decide on the applicability of this new metaheuristic to real-world applications. This paper analyses the metaheuristic qualitatively and quantitatively. The qualitative analysis studies the basic variant of the FPA and some extensions of it. For the quantitative analysis, the FPA is examined statistically by using it to solve the CEC 2013 benchmarks for real-parameter continuous optimisation, and then by applying it to some of the CEC 2011 benchmarks for real-world optimisation problems. In addition, some extensions of the FPA, based on opposition-based learning and on modifying the movement equation of the global-pollination operator, are presented and also analysed. On the whole, the basic FPA is found to offer below-average performance compared with state-of-the-art algorithms, while the best of the proposed extensions reaches average results.
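For reference, the global-pollination operator that the extensions above modify is conventionally written as x' = x + γ·L·(g* − x), with L a Lévy-distributed step and g* the current best solution. The sketch below follows these standard FPA conventions (Mantegna’s algorithm for the Lévy step); the constants are common defaults, not values taken from this paper.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Levy-stable step via Mantegna's algorithm, as commonly used
    in FPA: mostly small moves, occasionally very long jumps."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta *
              2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def global_pollination(x, best, gamma=0.1, rng=random):
    """Standard FPA global-pollination move: x' = x + gamma * L * (best - x)."""
    return [xi + gamma * levy_step(rng=rng) * (bi - xi)
            for xi, bi in zip(x, best)]

rng = random.Random(42)
moved = global_pollination([0.0, 0.0], [1.0, 1.0], rng=rng)
```

Opposition-based learning, mentioned in the abstract, would additionally evaluate the point reflected through the centre of the search bounds and keep the better of the two.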
17.
18.
19.
Michał Boczek Marek Kaluszka 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2016,20(7):2513-2519
We present a Carlson type inequality for the Sugeno integral and a much wider class of functions than the comonotone functions. We also provide three Carlson type inequalities for the Choquet integral. Our inequalities generalize many known results.
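For readers unfamiliar with the integral involved: the discrete Sugeno integral of f with respect to a fuzzy measure μ is sup over thresholds a of min(a, μ({f ≥ a})). A minimal sketch (the function names and the tiny example are ours, not from the paper):

```python
def sugeno_integral(values, mu):
    """Discrete Sugeno integral: sup_a min(a, mu({x : f(x) >= a})).
    `values` maps each element to f(x); `mu` maps a frozenset of
    elements to its measure in [0, 1].  Only thresholds equal to some
    f(x) need to be checked."""
    best = 0.0
    for a in sorted(set(values.values())):
        level = frozenset(x for x, v in values.items() if v >= a)
        best = max(best, min(a, mu[level]))
    return best

# f('x') = 0.3, f('y') = 0.8; mu gives the whole space measure 1
# and the singleton {'y'} measure 0.5.
mu = {frozenset({'x', 'y'}): 1.0, frozenset({'y'}): 0.5}
result = sugeno_integral({'x': 0.3, 'y': 0.8}, mu)
```

Here the supremum is attained at threshold 0.8, truncated by μ({'y'}) = 0.5.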
20.
Detection of the non-topology preservation of Ma and Sonka’s algorithm, by the use of P-simple points
《Computer Vision and Image Understanding》2010,114(3):384-399
In 1996, Ma and Sonka proposed a thinning algorithm which yields curve skeletons for 3D binary images [C. Ma, M. Sonka, A fully parallel 3D thinning algorithm and its applications, Comput. Vis. Image Underst. 64 (3) (1996) 420–433]. This algorithm is one of the most cited thinning algorithms in the context of digital topology, whether through its use in medical applications or in comparisons with other thinning algorithms. In 2007, Wang and Basu [T. Wang, A. Basu, A note on ‘a fully parallel 3D thinning algorithm and its applications’, Pattern Recognit. Lett. 28 (4) (2007) 501–506] wrote a paper in which they claim that Ma and Sonka’s 3D thinning algorithm does not preserve topology. As they highlight in their paper, a counter-example was given in 2001, in Lohou’s thesis [C. Lohou, Contribution à l’analyse topologique des images: étude d’algorithmes de squelettisation pour images 2D et 3D selon une approche topologie digitale ou topologie discrète, Ph.D. thesis, University of Marne-la-Vallée, France, 2001]. In this paper, it is shown how P-simple points guided the author towards a proof that Ma and Sonka’s algorithm does not always preserve topology. Moreover, the reasoning is very general and could be reused for the same purpose, i.e., to simplify proofs of non-topology preservation.