Similar Documents
 20 similar documents found
1.
2.
Ali Ayad 《Computing》2010,89(1-2):45-68
This paper presents a new algorithm for computing absolutely irreducible components of n-dimensional algebraic varieties defined implicitly by parametric homogeneous polynomial equations over ${\mathbb{Q}}$, the field of rational numbers. The algorithm computes a finite partition of the parameter space into constructible sets such that the absolutely irreducible components are given uniformly in each constructible set. Each component is represented by two items: first, a parametric representative system, i.e., the equations that define the component, and second, a parametric effective generic point which gives a parametric rational univariate representation of the elements of the component. The number of absolutely irreducible components is constant in each constructible set. The complexity bound of this algorithm is ${\delta^{O(r^4)}d^{r^4d^{O(n^3)}}}$, which is double exponential in n, where d (resp. δ) is an upper bound on the degrees of the input parametric polynomials w.r.t. the n main variables (resp. the r parameters).

3.
We consider the Rosenfeld–Gröbner algorithm for computing a regular decomposition of a radical differential ideal generated by a set of ordinary differential polynomials in n indeterminates. For a set of ordinary differential polynomials F, let M(F) be the sum of the maximal orders of the differential indeterminates occurring in F. We propose a modification of the Rosenfeld–Gröbner algorithm in which, for every intermediate polynomial system F, the bound $M(F) \le (n-1)!\,M(F_0)$ holds, where $F_0$ is the initial set of generators of the radical ideal. In particular, the resulting regular systems satisfy the bound. Since regular ideals can be decomposed into characterizable components algebraically, the bound also holds for the orders of derivatives occurring in a characteristic decomposition of a radical differential ideal.

4.
We describe a general algebraic formulation for a wide range of combinatorial problems. In this formulation each problem instance is represented by a pair of relational structures, and the solutions to a given instance are the homomorphisms between these relational structures. The corresponding decision problem consists of deciding whether or not any such homomorphism exists. We then demonstrate that the complexity of solving this decision problem is determined in many cases by simple algebraic properties of the relational structures involved. This result is used to identify tractable subproblems, and to provide a simple test to establish whether a given set of Boolean relations gives rise to one of these tractable subproblems.
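As an informal illustration of this homomorphism formulation (a minimal sketch, not code from the paper; the dictionary encoding of relational structures and all identifiers are assumptions made here), a brute-force existence check can be written directly from the definition:

```python
from itertools import product

def is_homomorphism(f, struct_a, struct_b):
    """Check that the map f preserves every relation of struct_a in struct_b."""
    for name, tuples_a in struct_a["relations"].items():
        tuples_b = struct_b["relations"][name]
        for t in tuples_a:
            if tuple(f[x] for x in t) not in tuples_b:
                return False
    return True

def homomorphism_exists(struct_a, struct_b):
    """Brute-force search over all maps from A's domain to B's domain."""
    dom_a, dom_b = struct_a["domain"], struct_b["domain"]
    for image in product(dom_b, repeat=len(dom_a)):
        f = dict(zip(dom_a, image))
        if is_homomorphism(f, struct_a, struct_b):
            return True
    return False

# Example: 2-colourability of a triangle = homomorphism into K2 (none exists).
triangle = {"domain": [0, 1, 2],
            "relations": {"E": {(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)}}}
k2 = {"domain": ["a", "b"],
      "relations": {"E": {("a", "b"), ("b", "a")}}}
print(homomorphism_exists(triangle, k2))  # False: a triangle is not 2-colourable
```

The paper's point is that algebraic properties of the structures govern how much better than this exponential brute force one can do for a given subproblem.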

5.
6.
We describe an algorithm for converting a characteristic set of a prime differential ideal from one ranking into another. This algorithm has been implemented in several different languages and has been applied within various software packages and projects. It has made it possible to solve formerly unsolved problems.

7.
Consider a class of binary functions h: X → {−1, +1} on a real interval X. Define the sample width of h on a finite subset (a sample) S ⊂ X as $\omega_S(h) = \min_{x \in S} |\omega_h(x)|$, where $\omega_h(x) = h(x)\,\max\{a \ge 0 : h(z) = h(x),\; x - a \le z \le x + a\}$. Consider the space of all samples in X of cardinality ℓ and the corresponding sets of wide samples, i.e., hypersets of samples on which h has width at least β > 0. Through an application of the Sauer–Shelah result on the density of sets, an upper estimate is obtained on the growth function (or trace) of this class of hypersets, i.e., on the number of possible dichotomies obtained by intersecting all hypersets with a fixed collection of samples of cardinality m.
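For intuition only, the sample width can be approximated numerically straight from the definition. This is a sketch under the assumption that the interval is discretised by a finite grid; none of the names below come from the paper.

```python
def width_at(h, x, grid):
    """Approximate |omega_h(x)|: distance from x to the nearest grid point
    where h disagrees with h(x) (the farthest grid point if h never disagrees)."""
    disagree = [abs(z - x) for z in grid if h(z) != h(x)]
    return min(disagree) if disagree else max(abs(z - x) for z in grid)

def sample_width(h, sample, grid):
    """omega_S(h) = minimum over the sample points of |omega_h(x)| (approximated)."""
    return min(width_at(h, x, grid) for x in sample)

# Example: a threshold function on [0, 1] with its decision boundary at 0.3.
grid = [i / 1000 for i in range(1001)]
h = lambda x: 1 if x >= 0.3 else -1
print(sample_width(h, sample=[0.1, 0.7], grid=grid))  # roughly 0.2
```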

8.
The bounded ILP-consistency problem for function-free Horn clauses is described as follows. Given sets $E^+$ and $E^-$ of function-free ground Horn clauses and an integer k polynomial in the size of $E^+ \cup E^-$, does there exist a function-free Horn clause C with no more than k literals such that C subsumes each element of $E^+$ and C does not subsume any element of $E^-$? It is shown that this problem is $\Sigma_2^P$-complete. We derive some related results on the complexity of ILP and discuss the usefulness of such complexity results.
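To make the subsumption relation in this decision problem concrete, here is a hedged brute-force sketch (the tuple encoding of literals, the uppercase-variable convention, and all identifiers are assumptions of this illustration, not the paper's notation): C subsumes a ground clause E if some substitution of C's variables by constants occurring in E maps every literal of C into E.

```python
from itertools import product

def constants(clause):
    """Constants occurring in a ground clause; a clause is a frozenset of
    literals of the form (sign, predicate, args)."""
    return {a for _, _, args in clause for a in args}

def variables(clause):
    """Assumed convention: argument symbols starting with an uppercase letter
    are variables (Prolog style)."""
    return {a for _, _, args in clause for a in args if a[0].isupper()}

def subsumes(c, e):
    """Does some substitution theta of c's variables by constants of e
    give c*theta as a subset of e?"""
    vs = sorted(variables(c))
    for values in product(sorted(constants(e)), repeat=len(vs)):
        theta = dict(zip(vs, values))
        image = {(sign, pred, tuple(theta.get(a, a) for a in args))
                 for sign, pred, args in c}
        if image <= e:
            return True
    return False

# C = { p(X, Y) } subsumes E = { p(a, b), q(b) } via X -> a, Y -> b.
C = frozenset({(True, "p", ("X", "Y"))})
E = frozenset({(True, "p", ("a", "b")), (True, "q", ("b",))})
print(subsumes(C, E))  # True
```

The consistency question then asks whether a single clause with at most k literals subsumes all of $E^+$ and none of $E^-$, which is where the second level of the polynomial hierarchy enters.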

9.
10.
We present a linear-time algorithm in the algebraic computation tree model for checking whether two sets of integers are equal. The significance of this result is in the fact that it shows that set equality testing is computationally easier when the elements of the sets are restricted to be integers. In addition, we show a linear-time algorithm for checking set inclusion in a slightly extended computational model.
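The paper's linear-time bound is proved in the algebraic computation tree model; as an everyday illustration of the two problems themselves (not the paper's algorithm), hashing gives expected linear time in Python:

```python
def sets_equal(a, b):
    """Expected linear-time equality test for two sets of integers via hashing.
    (Illustration only; the paper works in the algebraic computation tree
    model, which does not allow hashing.)"""
    return set(a) == set(b)

def set_included(a, b):
    """Check whether every element of a occurs in b (the set-inclusion variant)."""
    bs = set(b)
    return all(x in bs for x in a)

print(sets_equal([3, 1, 2, 2], [2, 3, 1]))   # True: as sets they are equal
print(set_included([1, 2], [2, 3, 1, 5]))    # True
```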

11.
12.
We consider certain counting problems involving colourings of graphs and independent sets in hypergraphs. Using polynomial interpolation techniques, we show that these problems are #P-complete. Therefore, efficient approximate counting is the most one can realistically expect to achieve. Rapidly mixing Markov chains which can be used for approximately solving some of these counting problems have recently been developed by the author and others. Received: June 19, 1998.

13.
We consider a scheduling problem where jobs have to be carried out by parallel identical machines. The attributes of a job j are a fixed start time $s_j$, a fixed finish time $f_j$, and a resource requirement $r_j$. Every machine owns R units of a renewable resource necessary to carry out jobs. A machine can process more than one job at a time, provided the resource consumption does not exceed R. The jobs must be processed in a non-preemptive way. Within this setting, the problem is to decide whether a feasible schedule for all jobs exists or not. We discuss this decision problem and prove that it is strongly NP-complete even when the number of resource units is fixed to any value R ≥ 2. Moreover, we suggest an implicit enumeration algorithm which has O(n log n) time complexity in the number n of jobs when the number m of machines and the number R of resources per machine are fixed. The role of storage layout and preemption is also discussed.
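Since the decision problem is strongly NP-complete, a short sketch can only verify a proposed assignment rather than find one; the data layout below is an assumption of this illustration.

```python
def assignment_feasible(jobs, assignment, R):
    """Verify a proposed job-to-machine assignment.

    jobs: dict j -> (s_j, f_j, r_j) with fixed start, finish and resource need.
    assignment: dict j -> machine index.
    R: resource units available on every machine.
    Checks that on each machine the total resource consumption of the jobs
    running at any time never exceeds R (jobs are non-preemptive, intervals
    treated as half-open [s_j, f_j)).
    """
    machines = {}
    for j, m in assignment.items():
        machines.setdefault(m, []).append(j)
    for m, js in machines.items():
        events = []
        for j in js:
            s, f, r = jobs[j]
            events.append((s, r))    # job starts: consumes r units
            events.append((f, -r))   # job finishes: releases r units
        events.sort(key=lambda e: (e[0], e[1]))  # releases before starts at ties
        load = 0
        for _, delta in events:
            load += delta
            if load > R:
                return False
    return True

jobs = {1: (0, 4, 2), 2: (1, 3, 2), 3: (2, 5, 1)}
print(assignment_feasible(jobs, {1: 0, 2: 0, 3: 1}, R=4))  # True
print(assignment_feasible(jobs, {1: 0, 2: 0, 3: 0}, R=4))  # False: load 5 > 4 on [2, 3)
```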

14.
We study the maximum-flow algorithm of Goldberg and Tarjan and show that the largest-label implementation runs in $O(n^2\sqrt{m})$ time. We give a new proof of this fact. We compare our proof with the earlier work of Cheriyan and Maheswari, who showed that the largest-label implementation of the preflow-push algorithm of Goldberg and Tarjan runs in $O(n^2\sqrt{m})$ time when implemented with current edges. Our proof that the number of nonsaturating pushes is $O(n^2\sqrt{m})$ does not rely on implementing pushes with current edges, and therefore it holds for a much larger family of largest-label implementations of preflow-push algorithms. Research performed while the author was a Ph.D. student at Cornell University and was partially supported by the Ministry of Education of the Republic of Turkey through the scholarship program 1416.
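For readers unfamiliar with the selection rule the bound refers to, here is a compact, unoptimized sketch of preflow-push with largest-label (highest active vertex) selection. It uses an adjacency-matrix representation and no current-edge data structure; it is an illustration written for this listing, not the implementation analysed in the paper.

```python
def max_flow_highest_label(cap, s, t):
    """Preflow-push with the largest-label selection rule.
    cap: n x n capacity matrix; returns the maximum-flow value from s to t."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n
    # Saturate all edges out of the source.
    for v in range(n):
        if cap[s][v] > 0:
            flow[s][v] = cap[s][v]
            flow[v][s] = -cap[s][v]
            excess[v] += cap[s][v]
            excess[s] -= cap[s][v]

    def residual(u, v):
        return cap[u][v] - flow[u][v]

    while True:
        active = [u for u in range(n) if u not in (s, t) and excess[u] > 0]
        if not active:
            break
        u = max(active, key=lambda v: height[v])   # largest-label rule
        pushed = False
        for v in range(n):
            if residual(u, v) > 0 and height[u] == height[v] + 1:
                d = min(excess[u], residual(u, v))  # push d units along (u, v)
                flow[u][v] += d
                flow[v][u] -= d
                excess[u] -= d
                excess[v] += d
                pushed = True
                break
        if not pushed:  # relabel: lift u just above its lowest residual neighbour
            height[u] = 1 + min(height[v] for v in range(n) if residual(u, v) > 0)
    return sum(flow[s][v] for v in range(n))

cap = [[0, 3, 2, 0],
       [0, 0, 1, 3],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
print(max_flow_highest_label(cap, 0, 3))  # 5 for this small example
```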

15.
Paris Kanellakis and the second author (Smolka) were among the first to investigate the computational complexity of bisimulation, and the first and third authors (Moller and Srba) have long-established track records in the field. Smolka and Moller have also written a brief survey about the computational complexity of bisimulation [ACM Comput. Surv. 27(2) (1995) 287]. The authors believe that the special issue of Information and Computation devoted to PCK50: Principles of Computing and Knowledge: Paris C. Kanellakis Memorial Workshop represents an ideal opportunity for an up-to-date look at the subject.

16.
17.
The Quantum Adiabatic Algorithm has been proposed as a general-purpose algorithm for solving hard optimization problems on a quantum computer. Early work on very small sizes indicated that the running time (complexity) only increased as a (quite small) power of the problem size N. We report results of Quantum Monte Carlo simulations, using parallel tempering, with which we determine the minimum energy gap (and hence obtain information about the complexity) for much bigger sizes than was possible before. The aim is to see if there is a “crossover” to exponential complexity at large N. We present data for the typical (median) complexity as a function of N, which indicate a crossover to a first-order transition at large sizes. This implies that the complexity is exponential at large N, at least for the problem studied.

18.
In this paper, metric complexities of certain classes of continuous-time systems are studied, using the time-domain sampling approach and the concepts of Kolmogorov, Gel'fand and sampling n-widths for certain classes of Sobolev spaces. A sampling theorem is obtained which extends Shannon's sampling theorem to systems with possibly non-band-limited spectra. The theorem demonstrates that continuous-time systems in certain Sobolev spaces can be approximately reconstructed causally from their sampled systems. The Kolmogorov, Gel'fand and sampling n-widths of various uncertainty sets in the Sobolev spaces are derived. The results show that the sampling approach is in fact asymptotically optimal for the modelling of systems in such Sobolev spaces, when the sampling interval is selected to minimize the loss of information in the sampling process.

19.
《International Journal of Computer Mathematics》2012,89(15):3330-3343
The concept of flexibility – which originated in the context of heat exchanger network design – is associated with a substructure that allows the same optimal value on the substructure (for example an optimal flow) as in the whole structure, for all costs in a given range of costs. In this work, we extend the concept of flexibility to general combinatorial optimization problems and prove several computational complexity results in this new framework. Under some monotonicity conditions, we prove that a combinatorial optimization problem can be polynomially reduced to its associated flexibility problem. However, the minimum cut, maximum weighted matching and shortest path problems have NP-complete associated flexibility problems. In order to obtain polynomial flexibility problems, we have to restrict ourselves to combinatorial optimization problems on matroids.

20.
The average complexity analysis of a formalism pertaining to pairs of compatible sequences is presented. The analysis is done at two levels, so that an accurate estimate is achieved. The separation of the candidate pairs into suitable classes of ternary sequences is of interest, since it allows the use of fundamental tools of symbolic computation, such as holonomic functions and asymptotic analysis, to derive the average complexity for sequences of length n.
