Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
We show that the absolute worst-case time complexity of Hopcroft's minimization algorithm applied to unary languages is reached only for deterministic automata or cover automata that follow the structure of de Bruijn words. A previous paper by Berstel and Carton used de Bruijn words to exhibit a language that requires O(n log n) steps for deterministic automata, by carefully choosing the splitting sets and processing the list of splitting sets in FIFO order. We refine that result by showing that the Berstel/Carton example is in fact the absolute worst case for unary languages and deterministic automata. We show that the same result also holds for cover automata under an algorithm based on Hopcroft's method for cover-automata minimization. Finally, we show that a LIFO implementation of the splitting list does not reach this absolute worst-case time complexity for unary languages, either for deterministic finite automata or for the deterministic finite cover automata defined by S. Yu.
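For context, here is a minimal Python sketch of Hopcroft's partition-refinement algorithm with the splitter list processed in FIFO order, as in the Berstel/Carton setting. This is a generic textbook rendering, not the paper's instrumented version; the state encoding and the unary de Bruijn example at the end are illustrative assumptions.

```python
from collections import deque

def hopcroft_minimize(states, alphabet, delta, finals):
    """Partition refinement with a FIFO splitter list (deque.popleft)."""
    finals = frozenset(finals)
    others = frozenset(states) - finals
    P = {B for B in (finals, others) if B}           # current partition
    W = deque(P)                                     # FIFO list of splitters
    inv = {(q, a): set() for q in states for a in alphabet}
    for (q, a), r in delta.items():                  # inverse transition table
        inv[(r, a)].add(q)
    while W:
        A = W.popleft()                              # FIFO: oldest splitter first
        for a in alphabet:
            X = {q for s in A for q in inv[(s, a)]}  # states entering A on a
            for Y in list(P):
                inter, diff = Y & X, Y - X
                if inter and diff:                   # Y is split by (A, a)
                    P.remove(Y)
                    P |= {inter, diff}
                    if Y in W:
                        W.remove(Y)
                        W.extend([inter, diff])
                    else:                            # enqueue the smaller half
                        W.append(min(inter, diff, key=len))
    return P

# Unary cyclic automaton labelled by the de Bruijn word 00010111 (order 3);
# the word is aperiodic, so all 8 states end up in singleton blocks.
n = 8
delta = {(i, 'a'): (i + 1) % n for i in range(n)}
print(hopcroft_minimize(set(range(n)), {'a'}, delta, {3, 5, 6, 7}))
```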

3.
Micro crack detection with Dijkstra’s shortest path algorithm
A package based on the free software R is presented that automatically detects micro cracks and performs statistical analysis of crack quantities. It uses a shortest-path algorithm to detect micro cracks in situations where the cracks are surrounded by plastic deformations and discriminating between cracks and plastic deformations is difficult. In the first step, crack clusters are detected as connected components of pixels whose values fall below a given threshold. The crack paths are then determined by Dijkstra's algorithm as the longest shortest paths through the darkest parts of the crack clusters. Linear segments of kinked paths can also be identified in this way. The new method was applied to over 2,000 images. Some statistical applications and a comparison with another free image tool are given.
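A minimal Python sketch of the core step: Dijkstra's algorithm on a 4-connected pixel grid whose node costs come from pixel intensity, so the path is pulled through the darkest parts of a crack cluster. The cost encoding is an illustrative assumption; the R package additionally detects the clusters, picks the endpoint pair realizing the longest shortest path, and runs its statistics.

```python
import heapq

def darkest_path(weight, start, goal):
    """Shortest path on a pixel grid; weight[r][c] is the cost of entering
    pixel (r, c) (darker pixel -> lower cost). Assumes goal is reachable."""
    rows, cols = len(weight), len(weight[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue                       # stale queue entry
        r, c = u
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (r + dr, c + dc)
            if 0 <= v[0] < rows and 0 <= v[1] < cols:
                nd = d + weight[v[0]][v[1]]
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(pq, (nd, v))
    path = [goal]                          # walk predecessors back to start
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

grid = [[9, 1, 9],
        [9, 1, 9],
        [9, 1, 1]]                         # a dark "crack" of 1s in bright 9s
print(darkest_path(grid, (0, 1), (2, 2)))  # follows the dark pixels
```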

4.
A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources suffice. Two examples of quantum speed-up are the Deutsch–Jozsa problem and Simon's problem, both efficiently solvable on a quantum Turing machine and both believed to lack efficient classical solutions. Here we present a framework that simulates both quantum algorithms efficiently, solving the Deutsch–Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as an ideal quantum computer would. The presented simulation framework is in turn efficiently simulatable on a classical probabilistic Turing machine. This shows that the Deutsch–Jozsa and Simon's problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into which properties the two algorithms actually rely on and calls for further study of oracle separations between quantum and classical computation.
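To make the baseline concrete: under the Deutsch–Jozsa promise (f is either constant or balanced on n-bit inputs), a deterministic classical algorithm needs 2^(n-1)+1 oracle queries in the worst case, versus the single query quoted above. The following is a hedged sketch of that classical baseline, not of the paper's simulation framework.

```python
def classify(f, n):
    """Decide 'constant' vs 'balanced' for f: {0,...,2**n - 1} -> {0,1},
    assuming the Deutsch-Jozsa promise holds.
    Worst case: 2**(n-1) + 1 oracle queries."""
    first = f(0)
    for x in range(1, 2 ** (n - 1) + 1):
        if f(x) != first:
            return "balanced"            # two differing values settle it
    return "constant"                    # majority of one value + promise

# Example: parity of the lowest bit is a balanced function.
print(classify(lambda x: x & 1, n=4))    # -> 'balanced'
```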

5.
The electromagnetic field induced by a horizontal electric dipole in layered lossless dielectrics between two ground planes contains complicated waveguide modes as well as surface waves. In the spectral domain, all these modes are characterized by the rational parts, with real poles, of the vector and scalar potentials. Accurate extraction of these modes plays an important role in evaluating the spatial-domain Green's function. In this paper, a new algorithm based on rational approximation is presented that accurately extracts all the real poles and the residue at each pole simultaneously. We thus obtain all the surface-wave and waveguide modes, which greatly aids the calculation of the spatial-domain Green's function. Numerical results demonstrate the accuracy and efficiency of the proposed method.
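As a toy illustration of pole/residue extraction in Python (assuming numpy/scipy are available): scipy.signal.residue computes the partial-fraction form of a known rational function. The paper's contribution is the harder step this sketch omits, namely fitting the rational approximation to the sampled spectral-domain potentials before poles and residues can be read off; the polynomials below are invented for illustration.

```python
from scipy.signal import residue

num = [1.0, 0.0]              # N(k) = k           (illustrative only)
den = [1.0, 0.0, -4.0]        # D(k) = k^2 - 4  -> real poles at +/- 2
r, p, _ = residue(num, den)   # residues, poles, direct polynomial term
for ri, pi in zip(r, p):
    print(f"pole {pi.real:+.3f}: residue {ri.real:+.3f}")
# k/(k^2 - 4) = 0.5/(k - 2) + 0.5/(k + 2)
```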

6.
An inverted pendulum is a sensitive system with highly coupled parameters. In laboratories it is popular for modelling nonlinear systems such as mechanisms and control systems, and for optimizing programmes before they are applied in real situations. This study aims to find the optimum input setting for a double inverted pendulum (DIP), which requires an appropriate input to stand up and achieve robust stability even when the system model is unknown. Such a DIP input could be widely applied in engineering fields for optimizing unknown systems on a limited budget. Previous studies have used various mathematical approaches to optimize settings for a DIP and then designed control algorithms or physical mathematical models. This study did not adopt a mathematical approach for the DIP controller, because our DIP has five input parameters within a nondeterministic system model. This paper proposes a novel algorithm, named UniNeuro, that integrates neural networks (NNs) and uniform design (UD) in a metamodel formed from the experimental input-response data. We employed a hybrid UD multiobjective genetic algorithm (HUDMOGA) to obtain the optimized input-parameter settings. The UD is also embedded in the HUDMOGA to enrich the solution set, with each chromosome used for crossover, mutation, and UD generation determined through a selection procedure and derived individually. We then combined the Euclidean distance and the Pareto front to improve the performance of the algorithm (a sketch of this selection step follows below). Finally, DIP equipment was used to confirm the settings. The proposed algorithm produced nine alternative input-parameter configurations that swing the DIP up and keep it standing with robust stability, from only 25 training data items and 20 optimized simulation results. Compared with a full factorial design, this design saves considerable experiment time, since the metamodel can be formed from only 25 experiments using the UD. Furthermore, the proposed algorithm can be applied to nonlinear systems with multiple constraints.
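A minimal Python sketch of that one ingredient, the Pareto-front filter combined with a Euclidean-distance pick. The UD sampling, the NN metamodel, and the HUDMOGA operators are beyond a short sketch, and the two-objective points below are invented.

```python
import math

def pareto_front(points):
    """Non-dominated points, minimising every objective."""
    def dominates(q, p):
        return all(qi <= pi for qi, pi in zip(q, p)) and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]

def pick_by_distance(front):
    """Front member closest (Euclidean) to the per-objective ideal point."""
    ideal = tuple(min(p[i] for p in front) for i in range(len(front[0])))
    return min(front, key=lambda p: math.dist(p, ideal))

pts = [(1.0, 5.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]   # invented objectives
front = pareto_front(pts)        # (3,3) is dominated by (2,2)
print(front, pick_by_distance(front))                     # picks (2.0, 2.0)
```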

7.
International Journal of Computer Mathematics, 2012, 89(2–4): 181–194
One of the most important problems in numerical analysis and numerical optimization is solving a linear system of equations. Often the computation must be repeated when one of the equations is replaced by a new one. In this paper, as a result of theoretical analysis, an algorithm model and a particular algorithm based on the ABS class are proposed. After the original linear system has been solved by the ABS class, the proposed algorithms can efficiently solve the new system obtained by replacing one of the original equations, reusing information from the previous computation. These algorithms can be applied repeatedly, with less computational effort, as equations of the original system are successively replaced by new ones.
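A hedged Python sketch of the underlying idea, using a Sherman–Morrison rank-one update rather than the ABS update itself: replacing row i of A is the rank-one change A + e_i(a_new − a_i)^T, so the new solution can reuse the previously computed inverse instead of solving from scratch. The function name and the assumption that the right-hand side b is unchanged are inventions of the sketch.

```python
import numpy as np

def resolve_after_row_swap(A, A_inv, b, i, new_row):
    """Solve A'x = b where A' is A with row i replaced, reusing A_inv.
    Sherman-Morrison: (A + e_i v^T)^-1
        = A^-1 - (A^-1 e_i)(v^T A^-1) / (1 + v^T A^-1 e_i)."""
    v = new_row - A[i]
    Ainv_ei = A_inv[:, i]
    denom = 1.0 + v @ Ainv_ei            # must be nonzero (A' nonsingular)
    A_inv_new = A_inv - np.outer(Ainv_ei, v @ A_inv) / denom
    return A_inv_new, A_inv_new @ b

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([1.0, 2.0])
_, x = resolve_after_row_swap(A, np.linalg.inv(A), b, 0, np.array([5.0, 2.0]))
print(x, np.linalg.solve(np.array([[5.0, 2.0], [2.0, 3.0]]), b))  # agree
```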

8.
Synopsis construction algorithms have been found to be of interest in query optimization, approximate query answering, and mining, and over the last few years several good synopsis construction algorithms have been proposed. These algorithms have mostly focused on the running time of synopsis construction vis-à-vis the synopsis quality. However, the space complexity of synopsis construction algorithms has not been investigated as thoroughly, and many of the optimal algorithms are expensive in space. For some of them the space required to construct the synopsis is significantly larger than the space required to store the input; these algorithms rely on needing only a smaller "working space", with most of the data resident on disc. The large space complexity of synopsis construction algorithms is a handicap in several scenarios. For streaming algorithms, space is a fundamental constraint. For offline optimal or approximate algorithms, better space complexity often makes the algorithms much more attractive, allowing them to run in main memory without using disc, or to scale to significantly larger problems without running out of space. In this paper, we propose a simple and general technique that reduces the space complexity of synopsis construction algorithms. As a consequence we show that the notion of "working space" proposed in these contexts is redundant. The technique can easily be applied to many existing synopsis construction algorithms, and we demonstrate its performance benefits through experiments on real-life and synthetic data. We believe that our algorithm also generalizes to a broader range of dynamic programs beyond synopsis construction. Sudipto Guha's research was supported in part by an Alfred P. Sloan Research Fellowship and by NSF Awards CCF-0430376 and CCF-0644119. A preliminary version of this paper appeared as "Space efficiency in synopsis construction algorithms", VLDB Conference 2005, Trondheim [19].
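A small Python sketch of the flavor of trick involved, under stated assumptions: the classical V-optimal histogram DP stores an O(nB) table, but the optimal error value needs only the previous column, giving O(n) working space. Recovering the bucket boundaries themselves within small space is where the paper's general technique comes in; this sketch returns only the error.

```python
def vopt_error(data, B):
    """Minimum total squared error of a B-bucket histogram of `data`,
    keeping only one DP column -> O(n) space instead of O(n*B)."""
    n = len(data)
    S = [0.0] * (n + 1)
    S2 = [0.0] * (n + 1)
    for i, x in enumerate(data):          # prefix sums: O(1) bucket error
        S[i + 1] = S[i] + x
        S2[i + 1] = S2[i] + x * x

    def sse(j, i):                        # squared error of bucket data[j:i]
        s = S[i] - S[j]
        return S2[i] - S2[j] - s * s / (i - j)

    prev = [0.0] + [sse(0, i) for i in range(1, n + 1)]   # one bucket
    for k in range(2, B + 1):
        cur = [float("inf")] * (n + 1)
        for i in range(k, n + 1):
            cur[i] = min(prev[j] + sse(j, i) for j in range(k - 1, i))
        prev = cur                        # discard older columns
    return prev[n]

print(vopt_error([1, 1, 8, 8, 30], B=2))  # best split [1,1,8,8]|[30] -> 49.0
```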

9.
In this paper we review the construction of variable-step methods for the numerical integration of the Schrödinger equation. Phase-lag and stability are investigated. The methods are variable-step by virtue of a simple, natural error-control mechanism. Numerical results obtained for coupled differential equations arising from the Schrödinger equation and for the wave equation show the validity of the presented approach.
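An error-control mechanism of this simple, natural kind can be sketched by step doubling: take one full step and two half steps, compare, and adapt the step size. The RK4 integrator and the tolerances below are illustrative assumptions, not the paper's Schrödinger-specific phase-fitted formulas.

```python
def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t, y, t_end, h=0.1, tol=1e-8):
    """Variable-step integration via step doubling: accept the two half
    steps when they agree with the full step, otherwise shrink h."""
    while t < t_end:
        h = min(h, t_end - t)
        y_full = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(y_full - y_half)
        if err <= tol:                    # simple natural error control
            t, y = t + h, y_half
            h *= 2 if err < tol / 32 else 1
        else:
            h /= 2
    return y

print(integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0))  # ~ exp(-1)
```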

10.
People who suffer from Parkinson's Disease face many challenges using computers, and mice are particularly problematic input devices. This article describes usability tests of standard peripherals for use by people with Parkinson's Disease, conducted to identify optimal combinations with respect to the needs of this user group. The results are used to determine the devices' effect on inertia, muscle stiffness, tremor, pain, strain, and coordination, and show that widely available equipment could significantly improve mouse-pointer control for many users. The results reflect the diversity of challenges experienced by computer users with Parkinson's Disease, and also illustrate how projector-based technology may improve computer interaction without risking strain injuries.

11.
D. R. Simon stated a problem, the so-called Simon's problem, whose computational complexity is in the class BQP^f but not in BPP^f, where f is the function (oracle) given in the problem. This result indicates that BPP may be strictly included in its quantum counterpart, BQP. Later, G. Brassard and P. Høyer showed that Simon's problem and its extended version can be solved by a deterministic polynomial-time quantum algorithm; that is, these problems are in the class EQP. In this paper, we show that Simon's problem and its extended version can be solved deterministically in a simpler and more concrete way than that proposed by G. Brassard and P. Høyer.
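For comparison, a classical baseline in Python: with only oracle access to f satisfying f(x) = f(x ⊕ s) for a hidden s ≠ 0, a birthday-style collision search takes about 2^(n/2) expected queries, versus the polynomially many queries of the quantum algorithms discussed above. The oracle encoding below is an illustrative assumption.

```python
import random

def find_shift(f, n):
    """Collision search for Simon's hidden shift s != 0, where
    f(x) == f(x ^ s).  Expected ~2**(n/2) oracle queries (birthday bound)."""
    seen = {}
    while True:
        x = random.randrange(2 ** n)
        y = f(x)
        if y in seen and seen[y] != x:
            return seen[y] ^ x           # a collision reveals s
        seen[y] = x

s = 0b1011
# min(x, x ^ s) is a valid Simon oracle: constant exactly on pairs {x, x^s}.
print(bin(find_shift(lambda x: min(x, x ^ s), n=4)))  # -> 0b1011
```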

12.
13.
14.
We present several results on the complexity of various forms of Sperner's Lemma in the black-box model of computing. We give a deterministic algorithm for Sperner problems over pseudo-manifolds of arbitrary dimension. The query complexity of our algorithm is linear in the separation number of the skeleton graph of the manifold and the size of its boundary. As a corollary we get a deterministic query algorithm for the black-box version of the problem 2D-SPERNER, a well-studied member of Papadimitriou's complexity class PPAD. This upper bound matches the deterministic lower bound of Crescenzi and Silvestri; the tightness of this bound was not known before. For the same problem we also prove lower bounds on its probabilistic and on its quantum query complexity, showing that all these measures are polynomially related. Research supported by the European Commission IST Integrated Project Qubit Applications (QAP) 015848, the OTKA grants T42559 and T46234, and by the ANR Blanc AlgoQP grant of the French Research Ministry.
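The black-box query model here can be illustrated with the one-dimensional analogue of Sperner's Lemma, which is far easier than the 2D-SPERNER problem of the abstract: given color(0)=0 and color(n)=1, a bichromatic edge must exist, and binary search finds one with O(log n) oracle queries. A hedged Python sketch:

```python
def bichromatic_edge(color, n):
    """Find i with color(i) != color(i+1), given color(0)=0 and color(n)=1.
    Invariant: color(lo) == 0 and color(hi) == 1, so an edge lies between.
    Uses O(log n) oracle queries."""
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if color(mid) == 0:
            lo = mid
        else:
            hi = mid
    return lo

print(bichromatic_edge(lambda i: 1 if i >= 7 else 0, 10))  # -> 6
```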

15.
16.
A recursive algorithm is adopted for computing dyadic Green's functions in three-dimensional stratified uniaxial anisotropic media with an arbitrary number of layers. Three groups of linear equations for the coefficients of the Sommerfeld integrals are obtained from the continuity of the electric and magnetic fields across the interfaces between layers; they correspond to the TM wave produced by a vertical unit electric dipole and to the TE and TM waves produced by a horizontal unit electric dipole, respectively. All the groups of linear equations can be solved via the recursive algorithm. The dyadic Green's functions with source point and field point in any layer are conveniently obtained by merely changing the position of the elements within the source term of the linear equations. The singularities occurring in the Sommerfeld integrals are handled efficiently by deforming the integration path into the complex plane. The resulting expression of the dyadic Green's functions is terse in form, easy to program, and does not overflow. Theoretical analysis and numerical examples show the accuracy and effectiveness of the algorithm.
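A toy Python sketch of the path-deformation idea (assuming numpy/scipy): lift the integration path for ∫ f over [a, b] onto a semi-elliptical arc in the complex plane so it passes clear of singularities on or near the real axis. Real Sommerfeld integrals also require the correct branch choice, the correct side of each pole, and semi-infinite tails, all of which this sketch omits.

```python
import numpy as np
from scipy.integrate import quad

def deformed_integral(f, a, b, height):
    """Integrate f from a to b along a semi-ellipse of the given height
    in the upper half-plane, avoiding near-real-axis singularities."""
    c, r = (a + b) / 2, (b - a) / 2
    z = lambda t: c - r * np.cos(t) + 1j * height * np.sin(t)   # path z(t)
    dz = lambda t: r * np.sin(t) + 1j * height * np.cos(t)      # dz/dt
    g = lambda t: f(z(t)) * dz(t)
    re = quad(lambda t: g(t).real, 0.0, np.pi)[0]
    im = quad(lambda t: g(t).imag, 0.0, np.pi)[0]
    return re + 1j * im

# Pole just below the real axis at k = 2 - 1e-9j; the deformed path stays
# well away from it, and the result is ~ -i*pi.
print(deformed_integral(lambda k: 1.0 / (k - 2.0 + 1e-9j), 0.0, 4.0, 1.0))
```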

17.
In this paper, we establish a general theorem for iteration functions in a cone normed space over \({\mathbb{R}}^n\). Using this theorem together with a general convergence theorem of Proinov (J Complex 33:118–144, 2016), we obtain a local convergence theorem with a priori and a posteriori error estimates, as well as a theorem under computationally verifiable initial conditions, for Schröder's iterative method considered as a method for the simultaneous computation of polynomial zeros of unknown multiplicity. Numerical examples demonstrating the convergence properties of the proposed method are also provided.
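For a single zero, Schröder's iteration (Newton's method applied to f/f') converges quadratically to a zero regardless of its multiplicity; the paper studies the simultaneous, all-zeros variant with convergence theorems in a cone normed space. A minimal single-root Python sketch under these assumptions:

```python
def schroder(f, df, ddf, x, tol=1e-12, max_iter=100):
    """Schroder's method x <- x - f*f' / (f'^2 - f*f''):
    quadratic convergence to a zero of unknown multiplicity."""
    for _ in range(max_iter):
        fx, d1, d2 = f(x), df(x), ddf(x)
        if fx == 0.0:                     # landed exactly on the zero
            break
        step = fx * d1 / (d1 * d1 - fx * d2)
        x -= step
        if abs(step) < tol:
            break
    return x

# Triple zero at x = 1: plain Newton is only linear here, Schroder is not.
print(schroder(lambda x: (x - 1) ** 3,
               lambda x: 3 * (x - 1) ** 2,
               lambda x: 6 * (x - 1),
               x=2.0))                    # -> 1.0
```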

18.
In this paper we establish some new generalizations and refinements of Hölder’s inequality and some related inequalities. We also show that many existing inequalities related to the Hölder inequality are special cases of the inequalities presented.
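For reference, the classical discrete form that these results refine, stated in LaTeX (the standard statement, not one of the paper's new generalizations):

```latex
\[
  \sum_{k=1}^{n} |a_k b_k|
  \le \Bigl(\sum_{k=1}^{n} |a_k|^{p}\Bigr)^{\!1/p}
      \Bigl(\sum_{k=1}^{n} |b_k|^{q}\Bigr)^{\!1/q},
  \qquad p, q > 1,\ \frac{1}{p} + \frac{1}{q} = 1 .
\]
```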

19.
Recall that Lebesgue’s singular function L(t) is defined as the unique solution to the equation L(t) = qL(2t) + pL(2t − 1), where p, q > 0, q = 1 − p, p ≠ q. The quantities \(M_n = \int_0^1 t^n \, dL(t)\), n = 0, 1, …, are called the moments of the function L. The principal result of this work is
\[ M_n = n^{\log_2 p}\, e^{-\tau(n)} \bigl(1 + O(n^{-0.99})\bigr), \]
where the function τ(x) is periodic in \(\log_2 x\) with period 1 and is given by
\[ \tau(x) = \frac{1}{2}\ln p + \Gamma'(1)\log_2 p + \frac{1}{\ln 2}\,\frac{\partial}{\partial z}\operatorname{Li}_z\!\Bigl(-\frac{q}{p}\Bigr)\Bigr|_{z=1} + \frac{1}{\ln 2}\sum_{k \neq 0} \Gamma(z_k)\,\operatorname{Li}_{z_k+1}\!\Bigl(-\frac{q}{p}\Bigr)\, x^{-z_k}, \qquad z_k = \frac{2\pi i k}{\ln 2},\ k \neq 0. \]
The proof is based on poissonization and the Mellin transform.

20.
Herman’s self-stabilisation algorithm provides a simple randomised solution to the problem of recovering from faults in an N-process token ring. However, a precise analysis of the algorithm's maximum execution time proves to be surprisingly difficult. McIver and Morgan have conjectured that the worst-case behaviour results from a ring configuration of three evenly spaced tokens, giving an expected time of approximately 0.15N²; however, the tightest upper bound proved to date is 0.64N². We apply probabilistic verification techniques, using the probabilistic model checker PRISM, to analyse the conjecture, showing it to be correct for all ring sizes that can be exhaustively analysed. We furthermore demonstrate that the worst-case execution time of the algorithm can be reduced by using a biased coin.
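A Monte-Carlo sketch in Python of the usual token-level abstraction (each token independently stays or moves one step clockwise per round; tokens meeting on a process annihilate in pairs), for checking the ~0.15N² figure empirically. The coin-bias parameter is exposed to mirror the biased-coin observation; this is a simulation sketch, not the PRISM model itself.

```python
import random
from collections import Counter

def herman_steps(N, tokens, p=0.5):
    """Rounds until one token remains on an N-process ring, starting from
    the given token positions; colliding tokens annihilate pairwise."""
    toks = list(tokens)
    steps = 0
    while len(toks) > 1:
        toks = [(t + (random.random() < p)) % N for t in toks]
        counts = Counter(toks)
        toks = [t for t, c in counts.items() for _ in range(c % 2)]
        steps += 1
    return steps

N, runs = 21, 2000
start = (0, N // 3, 2 * N // 3)           # three evenly spaced tokens
avg = sum(herman_steps(N, start) for _ in range(runs)) / runs
print(avg, 0.15 * N * N)                  # conjectured worst case ~ 0.15 N^2
```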
