Similar Documents
A total of 20 similar documents were retrieved.
1.
Computing an optimal solution to the knapsack problem is known to be NP-hard. Consequently, fast parallel algorithms for finding such a solution without using an exponential number of processors appear unlikely. An attractive alternative is to compute an approximate solution to this problem rapidly using a polynomial number of processors. In this paper, we present an efficient parallel algorithm for finding approximate solutions to the 0–1 knapsack problem. Our algorithm takes an ε, 0 < ε < 1, as a parameter and computes a solution whose deviation from the optimal solution is at most a fraction ε of the optimal value. For a problem instance having n items, this computation uses O(n^{5/2}/ε^{3/2}) processors and requires O(log^3 n + log^2 n · log(1/ε)) time. The upper bound on the processor requirement of our algorithm is established by reducing it to a problem on weighted bipartite graphs. This processor complexity is a significant improvement over that of other known parallel algorithms for this problem.
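For readers unfamiliar with ε-approximation of knapsack, the sketch below shows the standard sequential profit-scaling FPTAS. It is not the parallel algorithm of the abstract; it only illustrates how the parameter ε trades solution quality against work (function and variable names are ours).

    # Standard sequential profit-scaling FPTAS for 0-1 knapsack (illustrative;
    # the abstract's parallel algorithm is different).
    def knapsack_fptas(values, weights, capacity, eps):
        n = len(values)
        mu = eps * max(values) / n                 # profit scaling factor
        scaled = [int(v // mu) for v in values]
        top = sum(scaled)
        INF = float("inf")
        # min_weight[p] = smallest total weight achieving scaled profit exactly p
        min_weight = [0] + [INF] * top
        for v, w in zip(scaled, weights):
            for p in range(top, v - 1, -1):
                if min_weight[p - v] + w < min_weight[p]:
                    min_weight[p] = min_weight[p - v] + w
        best = max(p for p in range(top + 1) if min_weight[p] <= capacity)
        return best * mu                           # within a factor (1 - eps) of optimal

With n items the scaled-profit table has O(n^2/ε) entries, which is what keeps the work polynomial in n and 1/ε.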

2.
We consider the problem where π is an unknown permutation on {0, 1, …, 2^n − 1}, y_0 ∈ {0, 1, …, 2^n − 1}, and the goal is to determine the minimum r > 0 such that π^r(y_0) = 1. Information about π is available only via queries that yield π^x(y) for any x ∈ {0, 1, …, 2^m − 1} and y ∈ {0, 1, …, 2^n − 1} (where m is polynomial in n). The main resource under consideration is the number of these queries. We show that the number of queries necessary to solve the problem in the classical probabilistic bounded-error model is exponential in n. This contrasts sharply with the quantum bounded-error model, where a constant number of queries suffices.

3.
For a set of n jobs with deterministic processing times and a common starting time, the problem is to determine the optimal constant flow allowance k* for the CON due-date assignment method and the optimal sequence σ* that minimize the weighted average of missed due-dates. As k* and σ* cannot be determined independently, we propose an algorithm which systematically searches for the optimal solution. Although the algorithm has time complexity O(2^n · n^2), it is considerably more efficient than the exhaustive enumeration method which considers all n! possible sequences.

4.
We consider the following boundary value problem: (−1)^{n−1} y^{Δ^n}(t) = (−1)^{p+1} F(t, y(σ^{n−1}(t))), t ∈ [a, b] ∩ T, with boundary conditions y^{Δ^i}(a) = 0 for 0 ≤ i ≤ p − 1 and y^{Δ^i}(σ(b)) = 0 for p ≤ i ≤ n − 1, where n ≥ 2, 1 ≤ p ≤ n − 1 is fixed, and T is a time scale. By applying fixed-point theorems for operators on a cone, existence criteria are developed for triple positive solutions of the boundary value problem. We also include examples to illustrate the usefulness of the results obtained.

5.
Not every unambiguous regular grammar can be parsed by a finite state machine, even if a lookahead facility is added to the machine's capabilities. Those which can be parsed with a fixed lookahead of k are said to be FL(k). If such a grammar has n non-terminals, it never needs more than n(n − 1)/2 + 1 lookahead, and there exist grammars which do require this much. An algorithm is presented for determining whether a grammar is fixed lookahead parsable, and if so, for finding the minimum lookahead needed. The algorithm sets up and solves a set of O(n^2) equations using O(n^4) steps. Two parsing methods for FL(k) grammars are discussed. One uses a large precomputed parsing table and operates in real time. The second parses an input string in time proportional to its length, while using approximately 3n storage locations.

6.
The prism machine is a stack of n cellular arrays, each of size 2^n × 2^n. Cell (i, j) on level k is connected to cells (i, j), (i + 2^k, j), and (i, j + 2^k) on level k + 1, 1 ≤ k < n, where the sums are taken modulo 2^n. Such a machine can perform various operations (e.g., "Gaussian" convolutions or least-squares polynomial fits) on image neighborhoods of power-of-2 sizes in every position in O(n) time, unlike a pyramid machine, which can do this only in sampled positions. It can also compute the discrete Fourier transform in O(n) time. It consists of n · 4^n cells, while a pyramid consists of fewer than 4^{n+1}/3 cells; but in practice n would be at most 10, so that a prism would be at most about seven times as large as a pyramid.
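A minimal sketch of the inter-level wiring pattern described above (Python, with our own naming; the machine itself is hardware, so this only models the connectivity):

    def upward_neighbors(i, j, k, n):
        # Cells on level k + 1 reached from cell (i, j) on level k,
        # with coordinate sums taken modulo 2**n, as in the abstract.
        size = 2 ** n
        return [(i, j),
                ((i + 2 ** k) % size, j),
                (i, (j + 2 ** k) % size)]

Because the stride doubles from level to level, following these links through levels 1, …, k combines data from a neighborhood of width about 2^k in O(k) hops, which is the mechanism behind the O(n)-time power-of-2 neighborhood operations mentioned above.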

7.
We consider the problem of finding the extrema of a distributed multiset in a ring, that is, of determining the minimum and the maximum values, x_min and x_max, of a multiset X = {x_0, x_1, …, x_{n−1}} whose elements are drawn from a totally ordered universe U and stored at the n entities of a ring network. This problem is unsolvable if the ring size is not known to the entities, and it has complexity Θ(n^2) in the case of asynchronous rings of known size. We show that, in synchronous rings of known size, this problem can always be solved in O((c + log n) · n) bits and O(n · c · x^{1/c}) time for any integer c > 0, where x = max{|x_min|, |x_max|}. The previous solutions required O(n^2) bits and the same amount of time. Based on these results, we also present a bit-optimal solution to the problem of finding the multiplicity of the extrema.

8.
In this paper, a high-speed, low-cost and efficient design of reverse converter for the general three-moduli set {2^α, 2^β − 1, 2^β + 1}, where α < β, is presented. The simple proposed architecture consists of a carry save adder (CSA) and a modulo adder. As a result it can be efficiently implemented in VLSI circuits. The values of α and β are set in order to provide the desired dynamic range and also to obtain a balanced moduli set. Based on the above, two new moduli sets {2^{n+k}, 2^{2n} − 1, 2^{2n} + 1} and {2^{2n−1}, 2^{2n+1} − 1, 2^{2n+1} + 1}, which are special cases of the moduli set {2^α, 2^β − 1, 2^β + 1}, are proposed. The reverse converters for these new moduli sets are derived from the proposed general architecture with better performance compared to the other reverse converters for moduli sets with similar dynamic range.
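As a sanity check on the arithmetic (not the proposed CSA/modulo-adder architecture), the sketch below performs reverse conversion for {2^α, 2^β − 1, 2^β + 1} with a plain Chinese-remainder computation; the function name and the example values are ours.

    def reverse_convert(residues, alpha, beta):
        # Generic CRT-based reverse conversion for the three-moduli set
        # {2**alpha, 2**beta - 1, 2**beta + 1}; the moduli are pairwise coprime.
        moduli = [2 ** alpha, 2 ** beta - 1, 2 ** beta + 1]
        M = moduli[0] * moduli[1] * moduli[2]      # dynamic range
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)           # modular inverse (Python 3.8+)
        return x % M

    # Example: alpha = 4, beta = 5, dynamic range 16 * 31 * 33 = 16368.
    mods = [2 ** 4, 2 ** 5 - 1, 2 ** 5 + 1]
    assert reverse_convert([12345 % m for m in mods], 4, 5) == 12345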

9.
This paper presents efficient hypercube algorithms for solving triangular systems of linear equations by using various matrix partitioning and mapping schemes. Recently, several parallel algorithms have been developed for this problem. In these algorithms, the triangular solver is treated as the second stage of Gauss elimination. Thus, the triangular matrix is distributed by columns (or rows) in a wrap fashion, since it is likely that the matrix is distributed this way after an LU decomposition has been done on the matrix. However, the efficiency of the algorithms is low. Our motivation is to develop various data partitioning and mapping schemes for hypercube algorithms by treating the triangular solver as an independent problem. Theoretically, the computation time of our best algorithm is ((12p + 1)n^2 + 36p^3 − 28p^2)/(24p^2), and an upper bound on the communication time is 2αp log p (log n − log p) + 2α(log n − log p − 1) log p + (cn/p − 2c)(2 log p − 1) + (cn − c − α) log p, where α is the (communication startup time)/(one entry scanning time), c is a constant, n is the order of the triangular system and p is the number of nodes in the hypercube. Experimental results show that the algorithm is efficient. The efficiency of the algorithm is 0.945 when p = 2, n = 513, and 0.93 when p = 8, n = 1025.
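The sequential kernel being parallelised is ordinary forward substitution; a minimal sketch is given below, assuming a lower-triangular system (names are ours). The partitioning schemes studied in the paper differ in which parts of this double loop each hypercube node owns and what must be communicated.

    def solve_lower_triangular(L, b):
        # Forward substitution for L x = b with L lower triangular.
        n = len(b)
        x = [0.0] * n
        for i in range(n):
            s = b[i]
            for j in range(i):
                s -= L[i][j] * x[j]
            x[i] = s / L[i][i]
        return x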

10.
We study the problem of scheduling a parallel computation so as to minimize the maximum number of data items extant at any point in the execution. Computations are expressed as directed graphs, where nodes represent primitive operations and arcs represent data dependences. The result of an operation is extant after the operation executes and until all immediate successors have begun execution. Our goal is to schedule computations so as to minimize both the maximum space required for extant data and the overall completion time. The classical problem of multiprocessor scheduling with precedence constraints is a special case of our problem, obtained by disregarding the data-space constraint. This special case is NP-complete for general graphs; a time-optimal multiprocessor scheduling algorithm is known only for the class of arbitrary trees. For this same class of arbitrary trees we present a multiprocessor scheduling algorithm where the completion time is optimal within a constant factor, while the data-space size exceeds the optimal by a factor not greater than the number of processors. For an arbitrary n-node precedence tree T of in-degree Δ, we present:
(1) an algorithm for evaluating the lower bound on the size of data space required for executing T, regardless of the completion time or number of processors;
(2) a proof that the lower bound of Part 1 may be as large as (Δ − 1) lg n but not larger;
(3) a single-processor schedule that executes T in time that equals the optimal, while creating a data space of size equal to the lower bound of Part 1;
(4) an ω-processor schedule that executes T in time not exceeding three times the optimal, while creating a data space of size that exceeds the lower bound of Part 1 by a factor not greater than ω;
(5) a proof that for every number of processors ω and for every 0 < ε ≤ 1, there exist infinitely many trees such that every ω-processor schedule that executes any of these trees in time not exceeding (2 − ε) times the optimal requires a token space as large as that created by the schedule of Part 4, while the schedule of Part 4 executes every such tree in optimal time.
The family of complete binary trees provides an example where our schedule achieves an exponential improvement in the size of the data space, compared to that of the classical time-optimal schedule.
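This is not the paper's own Part 1 procedure, but a closely related classical quantity is the Sethi–Ullman-style register need of a tree, computed bottom-up by ordering subtrees by decreasing demand; the sketch below (our naming and tree encoding) illustrates the kind of "most demanding subtree first" bottom-up bound involved.

    def space_need(children_needs):
        # Bottom-up space requirement of a node, given the requirements of
        # its subtrees; a leaf needs one extant item.
        if not children_needs:
            return 1
        needs = sorted(children_needs, reverse=True)
        # Evaluating the most demanding subtree first, the i-th subtree is
        # evaluated while i results of earlier subtrees are still extant.
        return max(need + idx for idx, need in enumerate(needs))

    def tree_need(tree):
        # A tree is encoded as a nested list: a node is the list of its subtrees.
        return space_need([tree_need(c) for c in tree])

For a complete binary tree with 2^h leaves this evaluates to h + 1, i.e. logarithmic in the number of nodes, consistent with the exponential improvement mentioned above.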

11.
We provide a uniform framework for the study of index data structures for a two-dimensional matrix TEXT[1:n, 1:n] whose entries are drawn from an ordered alphabet Σ. An index for TEXT can be informally seen as the two-dimensional analog of the suffix tree for a string. It allows on-line searches and statistics to be performed on TEXT by representing compactly the Θ(n^3) square submatrices of TEXT in optimal O(n^2) space. We identify 4^{n−1} families of indices for TEXT, each containing ∏_{i=1}^{n} (2i−1)! isomorphic data structures. We also develop techniques leading to a single algorithm that efficiently builds any index in any family in O(n^2 log n) time and O(n^2) space. Such an algorithm improves in various respects the algorithms for the construction of the PAT tree and the Lsuffix tree. The framework and the algorithm easily generalize to d > 2 dimensions. Moreover, as part of our algorithm, we provide new algorithmic tools that yield a space-efficient implementation of the "naming scheme" of R. Karp et al. (in "Proceedings, Fourth Symposium on Theory of Computing," pp. 125–136) for strings and matrices.

12.
In this paper, we present decision procedures for the coverability, the subword, the containment, and the equivalence problems for commutative semigroups. These procedures require at most space 2^{c·n}, where n is the size of the problem instance and c is some problem-independent constant. Furthermore, we show that the exponential space hardness of the above problems follows from the work of Mayr and Meyer. Thus, the presented algorithms are space optimal. Our results close the gap between the 2^{c′·n·log n} space upper bound, shown by Rackoff for the coverability problem and by Huynh for the containment and the equivalence problems, and the exponential space lower bound resulting from the corresponding bound for the uniform word problem established by Mayr and Meyer.

13.
In a recent paper [Theoretical Computer Science 363, 257–265], He, Zhong and Gu considered the scheduling problem with a fixed non-availability interval under the non-resumable scenario. They proposed a polynomial time approximation scheme (PTAS) to minimize the total completion time. In this paper, we propose a fully polynomial-time approximation scheme (FPTAS) to minimize the total weighted completion time. The FPTAS has O(n^2/ε^2) time complexity, where n is the number of jobs and ε is the required error bound. The proposed FPTAS outperforms all the previous approximation algorithms designed for this problem, and its running time is strongly polynomial.

14.
We consider the problem of periodic exploration of all nodes in undirected graphs by a finite state automaton, called later a robot. The robot, using a constant number of states (memory bits), must be able to explore any unknown anonymous graph. The nodes in the graph are neither labelled nor coloured. However, while visiting a node v the robot can distinguish between the edges incident to it. The edges are ordered and labelled by consecutive integers 1, …, d(v) called port numbers, where d(v) is the degree of v. Periodic graph exploration requires that the automaton has to visit every node infinitely many times in a periodic manner. In this paper, we are interested in minimisation of the length of the exploration period. In other words, we want to minimise the maximum number of edge traversals performed by the robot between two consecutive visits of a generic node, in the same state and entering the node by the same port. Note that the problem is unsolvable if the local port numbers are set arbitrarily, see [L. Budach, Automata and labyrinths, Math. Nachr. 86 (1978) 195–282]. In this context, we are looking for the minimum function π(n) such that there exists an efficient deterministic algorithm for setting the local port numbers allowing the robot to explore all graphs of size n along a traversal route with period π(n). Dobrev et al. proved in [S. Dobrev, J. Jansson, K. Sadakane, W.-K. Sung, Finding short right-hand-on-the-wall walks in graphs, in: Proc. 12th Colloquium on Structural Information and Communication Complexity, SIROCCO 2005, in: Lecture Notes in Comput. Sci., vol. 3499, Springer, Berlin, 2005, pp. 127–139] that for oblivious robots π(n) ≤ 10n. Recently Ilcinkas proposed another port labelling algorithm for robots equipped with two extra memory bits, see [D. Ilcinkas, Setting port numbers for fast graph exploration, in: Proc. 13th Colloquium on Structural Information and Communication Complexity, SIROCCO 2006, in: Lecture Notes in Comput. Sci., vol. 4056, Springer, Berlin, 2006, pp. 59–69], where the exploration period satisfies π(n) ≤ 4n − 2. In the same paper, it is conjectured that the bound 4n − O(1) is tight even if the use of larger memory is allowed. In this paper, we disprove this conjecture by presenting an efficient deterministic algorithm that arranges the port numbers such that a robot equipped with a constant number of bits is able to complete the traversal period in π(n) < 3.75n − 2 steps, hence decreasing the existing upper bound. This reduces the gap with the lower bound of π(n) ≥ 2n − 2 holding for any robot.
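The oblivious traversal rule behind the 10n bound of Dobrev et al. is simply "leave by the port that follows the one you entered through." A sketch of that rule on a port-labelled graph is given below; the data layout and names are ours, and the improved labelling achieving 3.75n − 2 is not reproduced.

    def right_hand_walk(ports, start, steps):
        # ports[v] is a list indexed by port number minus 1; ports[v][q]
        # gives (u, q_back), meaning that port q + 1 of v leads to u and
        # corresponds to port q_back at u (1-based port numbers, as above).
        v, entry = start, len(ports[start])       # pretend we arrived via the last port
        route = [v]
        for _ in range(steps):
            exit_port = entry % len(ports[v]) + 1 # port following the entry port
            v, entry = ports[v][exit_port - 1]
            route.append(v)
        return route

Whether such a walk actually visits every node depends on how the port numbers are assigned, which is exactly what the labelling algorithms discussed above control.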

15.
16.
In this paper, we study the sample complexity of weak learning. That is, we ask how many data must be collected from an unknown distribution in order to extract a small but significant advantage in prediction. We show that it is important to distinguish between those learning algorithms that output deterministic hypotheses and those that output randomized hypotheses. We prove that in the weak learning model, any algorithm using deterministic hypotheses to weakly learn a class of Vapnik–Chervonenkis dimension d(n) requires Ω([formula]) examples. In contrast, when randomized hypotheses are allowed, we show that Θ(1) examples suffice in some cases. We then show that there exists an efficient algorithm using deterministic hypotheses that weakly learns against any distribution on a set of size d(n) with only O(d(n)^{2/3}) examples. Thus for the class of symmetric Boolean functions over n variables, where the strong learning sample complexity is Θ(n), the sample complexity for weak learning using deterministic hypotheses is Ω([formula]) and O(n^{2/3}), and the sample complexity for weak learning using randomized hypotheses is Θ(1). Next we prove the existence of classes for which the distribution-free sample size required to obtain a slight advantage in prediction over random guessing is essentially equal to that required to obtain arbitrary accuracy. Finally, for a class of small circuits, namely all parity functions of subsets of n Boolean variables, we prove a weak learning sample complexity of Θ(n). This bound holds even if the weak learning algorithm is allowed to replace random sampling with membership queries, and the target distribution is uniform on {0, 1}^n.

17.
In this paper we compare the size distortions and powers of Pearson's χ^2 statistic, the likelihood ratio statistic LR, the score statistic SC, and two statistics, which we call UT and VT here, proposed by [Potthoff, R.F., Whittinghill, M., 1966. Testing for homogeneity: II. The Poisson distribution. Biometrika 53, 183–190] for testing the equality of the rates of K Poisson processes. Asymptotic tests and parametric bootstrap tests are considered. It is found that the asymptotic UT test is too conservative to be recommended, while the other four asymptotic tests perform similarly and their powers are close to those of their parametric bootstrap counterparts when the observed counts are large enough. When the observed counts are not large, Monte Carlo simulation suggested that the asymptotic tests using the SC, LR and UT statistics are discouraged; none of the parametric bootstrap tests with the five statistics considered here is uniformly best or worst, and the asymptotic tests using Pearson's χ^2 and VT always have similar powers to their bootstrap counterparts. Thus, the asymptotic Pearson's χ^2 and VT tests have an advantage over all five parametric bootstrap tests in terms of their computational simplicity and convenience, and over the other four asymptotic tests in terms of their powers and size distortions.
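For concreteness, here is a sketch of the asymptotic Pearson χ^2 test and its parametric bootstrap version for K Poisson counts with equal exposures (our own implementation, covering only one of the five statistics compared in the paper).

    import numpy as np
    from scipy.stats import chi2

    def pearson_stat(counts):
        # Pearson chi-square for H0: the K Poisson rates are equal,
        # so the expected count in every cell is the overall mean.
        counts = np.asarray(counts, dtype=float)
        expected = counts.mean()
        return ((counts - expected) ** 2 / expected).sum()

    def poisson_homogeneity_test(counts, n_boot=2000, seed=None):
        rng = np.random.default_rng(seed)
        K, stat = len(counts), pearson_stat(counts)
        p_asym = chi2.sf(stat, df=K - 1)                    # asymptotic p-value
        lam_hat = float(np.mean(counts))                    # fitted common rate
        boot = [pearson_stat(rng.poisson(lam_hat, size=K)) for _ in range(n_boot)]
        p_boot = float(np.mean(np.asarray(boot) >= stat))   # bootstrap p-value
        return stat, p_asym, p_boot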

18.
Let n be a positive integer, and suppose n = p_1^{e_1} ⋯ p_k^{e_k} is its prime factorization. Let θ(n) = p_1^{e_1 − 1} ⋯ p_k^{e_k − 1}, so that n/θ(n) is the largest squarefree factor of n. We show that θ is deterministic polynomial time Turing reducible to φ, the Euler function. We also show that θ is reducible to λ, the Carmichael function. We survey other recent work on computing the square part of an integer and give upper and lower bounds on the complexity of solving the problem.
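To fix the notation, θ(n) is the cofactor of the largest squarefree factor. A small sketch computes it by trial-division factoring (purely illustrative; the point of the abstract is reducibility to φ and λ without factoring).

    def theta(n):
        # theta(n) = product of p**(e - 1) over the factorization n = prod p**e,
        # so n // theta(n) is the largest squarefree factor (radical) of n.
        result, m, p = 1, n, 2
        while p * p <= m:
            if m % p == 0:
                e = 0
                while m % p == 0:
                    m //= p
                    e += 1
                result *= p ** (e - 1)
            p += 1
        return result        # any leftover m > 1 is a prime with exponent 1

    assert theta(360) == 12 and 360 // theta(360) == 30   # 360 = 2^3 * 3^2 * 5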

19.
We show an O(1.344^n) = O(2^{0.427n}) algorithm for edge-coloring an n-vertex graph using three colors. Our algorithm uses polynomial space. This improves over the previous O(2^{n/2}) algorithm of Beigel and Eppstein [R. Beigel, D. Eppstein, 3-coloring in time O(1.3289^n), J. Algorithms 54 (2) (2005) 168–204]. We apply a very natural approach of generating inclusion-maximal matchings of the graph. The time complexity of our algorithm is estimated using the "measure and conquer" technique.
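The "natural approach" mentioned above can be prototyped on small graphs by listing the inclusion-maximal matchings (candidate colour classes) and testing whether the remaining edges are 2-edge-colourable. The brute-force sketch below only does the enumeration step and is exponential in the number of edges, unlike the O(1.344^n) algorithm; names are ours.

    from itertools import combinations

    def is_matching(edge_set):
        seen = set()
        for u, v in edge_set:
            if u in seen or v in seen:
                return False
            seen.update((u, v))
        return True

    def maximal_matchings(edges):
        # Yield every inclusion-maximal matching (brute force, small graphs only).
        edges = list(edges)
        for r in range(len(edges), 0, -1):
            for subset in combinations(edges, r):
                if not is_matching(subset):
                    continue
                covered = {x for e in subset for x in e}
                rest = [e for e in edges if e not in subset]
                # Maximal: no remaining edge has both endpoints uncovered.
                if all(u in covered or v in covered for u, v in rest):
                    yield set(subset)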

20.
This paper addresses the problem of determining the number of connected components of the space of real scalar M × N Hankel matrices of rank n. For n < min{M, N} it is shown that the space has n + 1 components for M + N even, and is connected for M + N odd. As an application, the number of components of sets of minimal partial realizations of McMillan degree n ≤ τ of length-τ sequences is determined.
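To make the objects concrete, here is a short sketch that builds an M × N real Hankel matrix from a sequence and checks its rank numerically (illustrative only; the paper's result concerns the topology of the set of such matrices, not rank computation).

    import numpy as np

    def hankel(seq, M, N):
        # H[i, j] = seq[i + j]; needs a sequence of length at least M + N - 1.
        return np.array([[seq[i + j] for j in range(N)] for i in range(M)])

    H = hankel([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0], 4, 4)
    print(np.linalg.matrix_rank(H))   # a geometric sequence gives rank 1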

