Similar Documents (20 results)
1.
We investigate the complexity of the satisfiability problem for constraints over finite totally ordered domains. In our context, a clausal constraint is a disjunction of inequalities of the form x ≥ d and x ≤ d. We classify the complexity of constraints based on clausal patterns. A pattern abstracts away from variables and contains only information about the domain elements and the type of inequalities occurring in a constraint. Every finite set of patterns gives rise to a (clausal) constraint satisfaction problem in which all constraints in instances must have an allowed pattern. We prove that every such problem is either polynomially decidable or NP-complete, and give a polynomial-time algorithm for recognizing the tractable cases. Some of these tractable cases are new and have not been previously identified in the literature.
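To make the constraint format concrete, here is a minimal sketch (not taken from the paper) of checking clausal constraints over a finite totally ordered domain {0, …, D}; the literal encoding and function names are illustrative assumptions.

```python
# Minimal sketch (hypothetical representation, not the paper's code): clausal
# constraints over a finite totally ordered domain {0, ..., D}.
# A literal is ("GE", var, d) meaning x_var >= d, or ("LE", var, d) meaning x_var <= d.
# A clausal constraint is a list of literals (their disjunction).

def literal_holds(literal, assignment):
    kind, var, d = literal
    value = assignment[var]
    return value >= d if kind == "GE" else value <= d

def constraint_satisfied(clause, assignment):
    """A clausal constraint holds iff at least one of its inequalities holds."""
    return any(literal_holds(lit, assignment) for lit in clause)

def instance_satisfied(clauses, assignment):
    return all(constraint_satisfied(c, assignment) for c in clauses)

# Example over domain {0,...,3}: (x0 >= 2 OR x1 <= 1) and (x0 <= 0 OR x1 >= 3)
clauses = [[("GE", 0, 2), ("LE", 1, 1)],
           [("LE", 0, 0), ("GE", 1, 3)]]
print(instance_satisfied(clauses, {0: 2, 1: 3}))  # True
```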

2.
3.
We study two natural extensions of Constraint Satisfaction Problems (CSPs). Balance-Max-CSP requires that in any feasible assignment each element in the domain is used an equal number of times. An instance of Hard-Max-CSP consists of soft constraints and hard constraints, and the goal is to maximize the weight of satisfied soft constraints while satisfying all the hard constraints. These two extensions contain many fundamental problems not captured by CSPs, and challenge traditional theories about CSPs in a more general framework. Max-2-SAT and Max-Horn-SAT are the only two nontrivial classes of Boolean CSPs that admit a robust satisfiability algorithm, i.e., an algorithm that finds an assignment satisfying at least a (1 − g(ε)) fraction of constraints given a (1 − ε)-satisfiable instance, where g(ε) → 0 as ε → 0 and g(0) = 0. We prove the inapproximability of these problems with balance or hard constraints, showing that each variant changes the nature of the problems significantly (in different ways). For instance, deciding whether an instance of 2-SAT admits a balanced assignment is NP-hard, and for Max-2-SAT with hard constraints, it is hard to find a constant-factor approximation even on (1 − ε)-satisfiable instances (in particular, the version with hard constraints does not admit a robust satisfiability algorithm). We also study hardness results for a certain CSP over a larger domain capturing ordering constraints: we show that hard constraints rule out constant-factor approximation algorithms. All our hardness results are almost optimal: they either completely rule out algorithms with certain properties, or can be matched by simple extensions of existing algorithms.
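As a toy illustration of the balance requirement (an illustrative brute force, not an algorithm from the paper), the following sketch decides whether a small 2-SAT instance admits a balanced assignment, i.e., one that sets exactly half of the variables to true.

```python
# Brute-force sketch (illustration only): does a 2-SAT instance admit a
# *balanced* assignment, i.e., one setting exactly half the variables to True?
# Clauses are pairs of literals; a literal is +i / -i for variable i (1-based).
from itertools import combinations

def clause_satisfied(clause, true_vars):
    return any((lit > 0) == (abs(lit) in true_vars) for lit in clause)

def has_balanced_assignment(clauses, n):
    assert n % 2 == 0, "balance requires an even number of variables"
    for true_vars in combinations(range(1, n + 1), n // 2):
        true_set = set(true_vars)
        if all(clause_satisfied(c, true_set) for c in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or not x2) forces exactly one of x1, x2 true: balanced.
print(has_balanced_assignment([(1, 2), (-1, -2)], n=2))  # True
```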

4.
We introduce a construction of a set of code sequences {C_n^(m) : n ≥ 1, m ≥ 1} with memory order m and code length N(n). {C_n^(m)} is a generalization of the polar codes presented by Arıkan in [1], where the encoder mapping with length N(n) is obtained recursively from the encoder mappings with lengths N(n − 1) and N(n − m), and {C_n^(m)} coincides with the original polar codes when m = 1. We show that {C_n^(m)} achieves the symmetric capacity I(W) of an arbitrary binary-input, discrete-output memoryless channel W for any fixed m. We also obtain an upper bound on the probability of block-decoding error P_e of {C_n^(m)} and show that \(P_e = O(2^{-N^\beta})\) is achievable for β < 1/[1 + m(ϱ − 1)], where ϱ ∈ (1, 2] is the largest real root of the polynomial \(F(m, \rho) = \rho^m - \rho^{m-1} - 1\). The encoding and decoding complexities of {C_n^(m)} decrease with increasing m, which proves the existence of new polar coding schemes that have lower complexity than Arıkan's construction.
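For intuition about the error exponent, a small numerical sketch (assumptions: the root symbol and formula are read from the abstract as quoted) finds the largest real root of F(m, ρ) = ρ^m − ρ^{m−1} − 1 by bisection and evaluates the bound 1/[1 + m(ϱ − 1)]; for m = 1 it recovers Arıkan's exponent 1/2.

```python
# Sketch (not from the paper): numerically find the largest real root rho* of
# F(m, rho) = rho^m - rho^(m-1) - 1 on (1, 2] by bisection, and evaluate the
# error-exponent bound beta < 1 / (1 + m*(rho* - 1)) quoted in the abstract.

def largest_root(m, tol=1e-12):
    f = lambda rho: rho**m - rho**(m - 1) - 1
    lo, hi = 1.0, 2.0          # F(m, 1) = -1 < 0 and F(m, 2) = 2^(m-1) - 1 >= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for m in (1, 2, 3, 4):
    rho = largest_root(m)
    print(m, round(rho, 6), round(1 / (1 + m * (rho - 1)), 6))
# m = 1 gives rho* = 2 and bound 1/2, matching Arıkan's original beta < 1/2.
```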

5.
We study coset weight distributions of binary primitive (narrow-sense) BCH codes of length n = 2^m (m odd) with minimum distance 8. In the previous paper [1], we described coset weight distributions of such codes for cosets of weight j = 1, 2, 3, 5, 6. Here we obtain exact expressions for the number of codewords of weight 4 in terms of exponential sums of three types, in particular cubic sums and Kloosterman sums. This allows us to find the coset distribution of binary primitive (narrow-sense) BCH codes with minimum distance 8 and also to obtain some new results on Kloosterman sums over finite fields of characteristic 2.

6.
In this paper we present a polynomial-time approximation scheme for the most points covering problem. In the most points covering problem, n points in R^2, a radius r > 0, and an integer m > 0 are given, and the goal is to cover the maximum number of points with m disks of radius r. The dual of the most points covering problem is the partial covering problem, in which n points in R^2 are given and we try to cover at least pn of these n points with the minimum number of disks. Both problems are NP-hard. To solve the most points covering problem, we use the solution of the partial covering problem to obtain an upper bound for the problem and then generate a valid solution for the most points covering problem by a careful modification of the partial covering solution. We first present an improved approximation algorithm for the partial covering problem which has a better running time than the previous algorithm for this problem. Using this algorithm, we attain a \((1 - \frac{2\varepsilon}{1 + \varepsilon})\)-approximation algorithm for the most points covering problem. The running time of our algorithm is \(O((1+\varepsilon)mn + \varepsilon^{-1} n^{4\sqrt{2}\varepsilon^{-1}+2})\), which is polynomial in both m and n, whereas the previously known algorithm for this problem runs in \(O(n \log n + n \varepsilon^{-6m+6} \log(\frac{1}{\varepsilon}))\) time, which is exponential in m.
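As a simple illustration of the objective (not the paper's PTAS), the following sketch counts how many points a candidate set of m disk centers of radius r covers; the names and inputs are illustrative.

```python
# Illustration only (not the paper's algorithm): evaluate a candidate solution
# to the most-points-covering problem by counting points within distance r of
# at least one of the m chosen disk centers.
from math import hypot

def points_covered(points, centers, r):
    return sum(
        1 for (px, py) in points
        if any(hypot(px - cx, py - cy) <= r for (cx, cy) in centers)
    )

points = [(0, 0), (1, 0), (3, 0), (3.5, 0.5), (10, 10)]
centers = [(0.5, 0), (3.2, 0.2)]               # m = 2 disks
print(points_covered(points, centers, r=1.0))  # 4 of the 5 points are covered
```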

7.
The Doob graph D(m, n), where m > 0, is the Cartesian product of m copies of the Shrikhande graph and n copies of the complete graph K_4 on four vertices. The Doob graph D(m, n) is a distance-regular graph with the same parameters as the Hamming graph H(2m + n, 4). We give a characterization of MDS codes in Doob graphs D(m, n) with code distance at least 3. Up to equivalence, there are m^3/36 + 7m^2/24 + 11m/12 + 1 − (m mod 2)/8 − (m mod 3)/9 MDS codes with code distance 2m + n in D(m, n), two codes with distance 3 in each of D(2, 0) and D(2, 1) and with distance 4 in D(2, 1), and one code with distance 3 in each of D(1, 2) and D(1, 3) and with distance 4 in each of D(1, 3) and D(2, 2).
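A quick sketch evaluating the counting formula quoted above for small m (illustrative code, not from the paper); note that it produces an integer for every m, as expected of a count.

```python
# Sketch: evaluate the abstract's count of inequivalent MDS codes with code
# distance 2m + n in the Doob graph D(m, n), using exact rational arithmetic.
from fractions import Fraction

def mds_code_count(m):
    return (Fraction(m**3, 36) + Fraction(7 * m**2, 24) + Fraction(11 * m, 12)
            + 1 - Fraction(m % 2, 8) - Fraction(m % 3, 9))

for m in range(1, 7):
    print(m, mds_code_count(m))   # 2, 4, 7, 11, 16, 23 -- an integer for every m
```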

8.
We show that the NP-hard optimization problems minimum and maximum weight exact satisfiability (XSAT) for a CNF formula C over n propositional variables equipped with arbitrary real-valued weights can be solved in O(||C|| · 2^{0.2441n}) time. To the best of our knowledge, the algorithms presented here are the first to handle weighted XSAT optimization versions in non-trivial worst-case time. We also investigate the corresponding weighted counting problems, namely we show that the number of all minimum, resp. maximum, weight exact satisfiability solutions of an arbitrarily weighted formula can be determined in O(n^2 · ||C|| + 2^{0.40567n}) time. In recent years only the unweighted counterparts of these problems have been studied (Dahllöf and Jonsson, An algorithm for counting maximum weighted independent sets and its applications. In: Proceedings of the 13th ACM-SIAM Symposium on Discrete Algorithms, pp. 292–298, 2002; Dahllöf et al., Theor. Comput. Sci. 320:373–394, 2004; Porschen, On some weighted satisfiability and graph problems. In: Proceedings of the 31st Conference on Current Trends in Theory and Practice of Informatics (SOFSEM 2005), Lecture Notes in Computer Science, vol. 3381, pp. 278–287, Springer, 2005).
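To fix the definition of exact satisfiability, here is an exponential brute-force sketch (illustration only, not the paper's O(||C|| · 2^{0.2441n}) algorithm) for the minimum-weight variant: a clause is exactly satisfied when precisely one of its literals is true.

```python
# Illustration only (exponential brute force): minimum-weight exact
# satisfiability. A clause is XSAT-satisfied when *exactly one* of its literals
# is true; we minimize the total weight of variables set to True.
from itertools import product

def min_weight_xsat(clauses, weights):
    n = len(weights)
    best = None
    for bits in product((False, True), repeat=n):
        ok = all(sum((lit > 0) == bits[abs(lit) - 1] for lit in c) == 1
                 for c in clauses)
        if ok:
            w = sum(weights[i] for i in range(n) if bits[i])
            best = w if best is None else min(best, w)
    return best

# Clauses {x1, x2} and {x2, x3}: exactly one literal true in each clause.
print(min_weight_xsat([(1, 2), (2, 3)], weights=[5.0, 2.0, 3.0]))  # 2.0 (x2 only)
```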

9.
For the basic problem of scheduling a set of n independent jobs on a set of m identical parallel machines with the objective of maximizing the minimum machine completion time—also referred to as machine covering—we propose a new exact branch-and-bound algorithm. Its most distinctive components are a different symmetry-breaking solution representation, enhanced lower and upper bounds, and effective novel dominance criteria derived from structural patterns of optimal schedules. Results of a comprehensive computational study conducted on benchmark instances attest to the effectiveness of our approach, particularly for small ratios of n to m.
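For context, a standard LPT-style greedy (a generic heuristic, not necessarily one of the paper's bounds) yields a feasible schedule and hence a lower bound on the optimal minimum machine completion time, the kind of starting bound a branch-and-bound can use.

```python
# Sketch (generic heuristic, not the paper's method): an LPT-style greedy for
# machine covering -- assign each job, largest first, to the machine with the
# currently smallest load. The resulting minimum load is a feasible lower bound.
import heapq

def lpt_machine_covering(jobs, m):
    loads = [0] * m
    heapq.heapify(loads)                 # min-heap of machine loads
    for p in sorted(jobs, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return min(loads)                    # minimum machine completion time

print(lpt_machine_covering([7, 6, 5, 4, 3, 2], m=2))  # 13 (e.g., {7,4,2} vs {6,5,3})
```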

10.
We consider a geographic optimization problem in which we are given a region R, a probability density function f(·) defined on R, and a collection of n utility density functions u_i(·) defined on R. Our objective is to divide R into n sub-regions R_i so as to "balance" the overall utilities on the regions, which are given by the integrals \(\iint_{R_i} f(x) u_i(x)\, dA\). Using a simple complementary slackness argument, we show that (depending on what we mean precisely by "balancing" the utility functions) the boundary curves between optimal sub-regions are level curves of either the difference function u_i(x) − u_j(x) or the ratio u_i(x)/u_j(x). This allows us to solve the problem of optimally partitioning the region efficiently by reducing it to a low-dimensional convex optimization problem. This result generalizes, and gives very short and constructive proofs of, several existing results in the literature on equitable partitioning for particular forms of f(·) and u_i(·). We next give two economic applications of our results in which we show how to compute a market-clearing price vector in an aggregate demand system or a variation of the classical Fisher exchange market. Finally, we consider a dynamic problem in which the density function f(·) varies over time (simulating population migration or transport of a resource, for example) and derive a set of partial differential equations that describe the evolution of the optimal sub-regions over time. Numerical simulations for both static and dynamic problems confirm that such partitioning problems become tractable when using our methods.

11.
This article proposes a method for studying M/E_s/1/m, E_r/E_s/1/m, and E_r/M/n/m queueing systems, including the case m = ∞. Recurrence relations are obtained to compute the stationary distribution of the number of customers in a system and its steady-state characteristics. The developed algorithms are tested on examples using simulation models constructed with the help of the GPSS World tools.

12.
We present a first study concerning the optimization of a nonlinear fuzzy function f depending both on a crisp variable and on a fuzzy number; the function value is therefore a fuzzy number. More specifically, given a real fuzzy number ã ∈ F and a function f(a, x): R^2 → R, we consider the fuzzy extension induced by f, f̃ : F × R → F, f̃(ã, x) = Ỹ. If K is a convex subset of R, the problem we consider is "maximizing" f̃(ã, x), x ∈ K. The first issue is the meaning of the word "maximizing": it is well known that ranking fuzzy numbers is a complex matter. Following a general method, we introduce a real function (evaluation function) on real fuzzy numbers in order to get a crisp rating, induced by the order of the real line. In this way, the optimization problem on fuzzy numbers can be written in terms of an optimization problem for the real-valued function obtained by composing f with a suitable evaluation function. This approach allows us to state a necessary and sufficient condition for x̄ ∈ K to be the maximum of f̃ in K when f(a, x) is convex-concave (Theorem 4.1).

13.
For any pair of integers r and m with 0 ≤ r ≤ m, we construct a class of quaternary linear codes whose binary images under the Gray map are codes with the parameters of the classical rth-order Reed–Muller code RM(r, m).
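For reference, the standard Gray map sends 0 → 00, 1 → 01, 2 → 11, 3 → 10, and binary images of quaternary codes are obtained by applying it coordinatewise; a minimal sketch (illustrative, not from the paper):

```python
# Sketch: the standard Gray map from Z_4 to binary pairs, applied coordinatewise,
# which is how binary images of quaternary (Z_4-linear) codes are usually formed.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(quaternary_word):
    """Map a word over Z_4 of length N to a binary word of length 2N."""
    return tuple(bit for symbol in quaternary_word for bit in GRAY[symbol])

print(gray_image((0, 1, 2, 3)))  # (0, 0, 0, 1, 1, 1, 1, 0)
```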

14.
The Hybrid Search for Minimal Perturbation Problems algorithm for Dynamic CSPs (HS_MPP) (Zivan, Constraints, 16(3), 228–249, 2011) guarantees, for a given dynamic problem and the solution of the previous CSP, to find an optimal solution to the newly generated CSP. The method exploits the fact that its reported solution must satisfy two requirements: first, that it is a complete assignment solving the derived CSP, and second, that it is as close as possible to the solution of the former CSP. Unfortunately, the pseudo-code of the algorithm in Zivan (Constraints, 16(3), 228–249, 2011) is confusing and may lead to an implementation in which HS_MPP does not produce the expected outcome on a given instance of a Dynamic CSP. In this erratum, we demonstrate the possible undesired outcomes and give corrections to HS_MPP's pseudo-code.

15.
We consider the following scheduling problem. We have m identical machines, where each machine can accomplish one unit of work at each time unit. We have a set of n fully parallel jobs, where each job j has \(s_j\) units of workload, and each unit of workload can be executed on any machine at any time unit. A job is considered complete when its entire workload has been executed. The objective is to find a schedule that minimizes the total weighted completion time \(\sum w_j C_j\), where \(w_j\) is the weight of job j and \(C_j\) is the completion time of job j. We provide theoretical results for this problem. First, we give a PTAS for this problem with fixed m. We then consider the special case where \(w_j = s_j\) for each job j, and we show that it is polynomially solvable with fixed m. Finally, we study the approximation ratio of a greedy algorithm, the Largest-Ratio-First algorithm. For the special case, we show that the approximation ratio depends on the instance size, i.e., on n and m, while for the general case where jobs have arbitrary weights, we prove that the upper bound of the approximation ratio is \(1 + \frac{m-1}{m+2}\).
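One natural reading of the Largest-Ratio-First greedy (an assumption about the exact rule, for illustration only) is to process jobs in nonincreasing w_j/s_j order, packing workload units onto the m machines in that order, so that job j completes at ⌈(s_1 + ⋯ + s_j)/m⌉:

```python
# Sketch (one assumed reading of Largest-Ratio-First, not necessarily the
# paper's exact rule): schedule jobs in nonincreasing w_j / s_j order and pack
# their workload units greedily onto the m machines, so the j-th scheduled job
# completes at ceil((s_1 + ... + s_j) / m). Returns the total weighted
# completion time of this feasible schedule.
from math import ceil

def largest_ratio_first(jobs, m):
    """jobs: list of (weight w_j, workload s_j); m: number of identical machines."""
    ordered = sorted(jobs, key=lambda ws: ws[0] / ws[1], reverse=True)
    total, prefix = 0, 0
    for w, s in ordered:
        prefix += s
        total += w * ceil(prefix / m)    # completion time of this job
    return total

print(largest_ratio_first([(3, 2), (1, 4), (2, 2)], m=2))  # 11 (ratios 1.5, 1.0, 0.25)
```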

16.
A new approach to polynomial higher-order approximation (smoothing) based on the basic elements method (BEM) is proposed. A BEM polynomial of degree n is defined by four basic elements specified on a three-point grid x_0 + α < x_0 < x_0 + β, αβ < 0. Formulas for the calculation of the coefficients of the polynomial model of order 12 are derived. These formulas depend on the interval length, the continuous parameters α and β, and the values f^{(m)}(x_0 + ν), ν = α, β, 0, m = 0, …, 3. The application of higher-degree BEM polynomials in piecewise-polynomial approximation and smoothing improves the stability and accuracy of calculations when the grid step is increased and reduces the computational complexity of the algorithms.

17.
In secure delegatable computation, computationally weak devices (or clients) wish to outsource their computation and data to an untrusted server in the cloud. While most earlier work considers the general question of how to securely outsource any computation to the cloud server, we focus on concrete and important functionalities and give the first protocol for the pattern matching problem in the cloud. Loosely speaking, this problem considers a text T that is outsourced to the cloud server S by a sender SEN. In a query phase, receivers REC_1, …, REC_l run an efficient protocol with the server S and the sender SEN in order to learn the positions at which a pattern of length m matches the text (and nothing beyond that). This is called the outsourced pattern matching problem, which is highly motivated in the context of delegatable computing since it offers storage alternatives for massive databases that contain confidential data (e.g., health-related data about patient history). Our constructions are simulation-based secure in the presence of semi-honest and malicious adversaries (in the random oracle model) and limit the communication in the query phase to O(m) bits plus the number of occurrences, which is optimal. In contrast to generic solutions for delegatable computation, our schemes do not rely on fully homomorphic encryption but instead use novel ideas for solving pattern matching, based on a reduction to the subset sum problem. Interestingly, we do not rely on the hardness of the problem, but rather exploit instances that are solvable in polynomial time. A follow-up result demonstrates that the random oracle is essential in order to meet our communication bound.

18.
It is known that the controllable system x′ = Bx + Du, where x is an n-dimensional vector, can be transferred from an arbitrary initial state x(0) = x_0 to an arbitrary final state x(T) = x_T by a control function u(t) in the form of a polynomial in powers of t. In this work, the minimum degree of such a polynomial is revised: it equals 2p + 1, where (p − 1) is the minimum number of matrices in the controllability matrix (Kalman criterion) whose rank is equal to n. A simpler and more natural algorithm is obtained, which first leads to finding the coefficients of a certain polynomial from a system of algebraic equations involving the Wronskian and then, with the aid of differentiation, to the construction of the state and control functions.

19.
We present a new method for clausal theorem proving, named SGGS after semantically-guided goal-sensitive reasoning. SGGS generalizes to first-order logic the conflict-driven clause learning (CDCL) procedure for propositional satisfiability. Starting from an initial interpretation, used for semantic guidance, SGGS employs a sequence of constrained clauses to represent a candidate model, instance generation to extend it, and resolution and other inferences to explain and solve conflicts, amending the model. We prove that SGGS is refutationally complete and model complete in the limit, regardless of the initial interpretation. SGGS is also goal sensitive, if the initial interpretation is properly chosen, and proof confluent, because it repairs the current model without undoing steps by backtracking. Thus, SGGS is a complete first-order method that is simultaneously model-based à la CDCL, semantically guided, goal sensitive, and proof confluent.

20.
In its simplest form, the longest common substring problem is to find a longest substring common to two or multiple strings. Using (generalized) suffix trees, this problem can be solved in linear time and space. A first generalization is the k-common substring problem: given m strings of total length n, for all k with 2 ≤ k ≤ m simultaneously find a longest substring common to at least k of the strings. It is known that the k-common substring problem can also be solved in O(n) time (Hui, in Proc. 3rd Annual Symposium on Combinatorial Pattern Matching, volume 644 of Lecture Notes in Computer Science, pp. 230–243, Springer, Berlin, 1992). A further generalization is the k-common repeated substring problem: given m strings T^(1), T^(2), …, T^(m) of total length n and m positive integers x_1, …, x_m, for all k with 1 ≤ k ≤ m simultaneously find a longest string ω for which there are at least k strings \(T^{(i_{1})}, T^{(i_{2})}, \ldots, T^{(i_{k})}\) (1 ≤ i_1 < i_2 < ⋯ < i_k ≤ m) such that ω occurs at least \(x_{i_{j}}\) times in \(T^{(i_{j})}\) for each j with 1 ≤ j ≤ k. (For x_1 = ⋯ = x_m = 1, we have the k-common substring problem.) In this paper, we present the first O(n)-time algorithm for the k-common repeated substring problem. Our solution is based on a new linear-time algorithm for the k-common substring problem.
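A naive quadratic sketch of the k-common substring problem (illustration only; the cited algorithms run in linear time): tabulate, for every substring, which input strings contain it, then take maxima.

```python
# Naive sketch (illustration only; the paper achieves O(n) time): for every k,
# find the length of a longest substring common to at least k of the m input
# strings, by tabulating which strings contain each substring.
from collections import defaultdict

def k_common_substrings(strings):
    m = len(strings)
    containing = defaultdict(set)            # substring -> set of string indices
    for idx, s in enumerate(strings):
        for i in range(len(s)):
            for j in range(i + 1, len(s) + 1):
                containing[s[i:j]].add(idx)
    best = [0] * (m + 1)                      # best[k] = longest length for >= k strings
    for sub, owners in containing.items():
        k = len(owners)
        best[k] = max(best[k], len(sub))
    # a substring common to >= k strings is also common to >= k-1 strings
    for k in range(m - 1, 1, -1):
        best[k] = max(best[k], best[k + 1])
    return {k: best[k] for k in range(2, m + 1)}

print(k_common_substrings(["banana", "bandana", "cabana"]))  # {2: 4, 3: 3}
```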
