Similar Documents
20 similar documents found (search time: 15 ms)
1.
Finite binary sequences are analysed according to their content of homogeneity and symmetry. For this purpose two series of measures are defined, using cyclic transformations and their inversions. The homogeneity and symmetry measures were found to be equal in special cases. The invariance properties of the different measures were analysed and related to the theory of dihedral symmetry groups. Further, the relationships of the measures to finite Fourier analysis, correlation theory, and the entropy concept of information theory are outlined. For the concatenation of two sequences X and Y, "superadditivity", i.e. H(XY) ≥ H(X) + H(Y), holds for homogeneity and symmetry in most cases, and for a modified definition of homogeneity in every case: just complementary to the subadditivity of various complexity measures known from the literature.

2.
This paper is a contribution to the theory of grammatical complexity, in particular to the following basic question: given a language L and grammars of some type X, what is the smallest number of productions of a type X grammar required to generate L? This complexity measure, the so-called X complexity of L, has been investigated before. We study the more basic case in which the languages L considered are finite (a case which has been neglected so far). We obtain a number of results and insights suggesting that such a study is of importance. In addition to a number of 'expected' (but not necessarily easy to prove) results that type X grammars require more productions than some other type Y grammars (even if types X and Y define the same family of languages), we show that if the limit of certain sequences of finite languages is of type X1 then the type X complexity of each of the finite languages involved must be low.

3.
In a graph G=(V,E), a bisection (X,Y) is a partition of V into sets X and Y such that |X| ≤ |Y| ≤ |X|+1. The size of (X,Y) is the number of edges between X and Y. In the Max Bisection problem we are given a graph G=(V,E) and are required to find a bisection of maximum size. It is not hard to see that ⌈|E|/2⌉ is a tight lower bound on the maximum size of a bisection of G. We study the parameterized complexity of the following parameterized problem, called Max Bisection above Tight Lower Bound (Max-Bisec-ATLB): decide whether a graph G=(V,E) has a bisection of size at least ⌈|E|/2⌉+k, where k is the parameter. We show that this parameterized problem has a kernel with O(k^2) vertices and O(k^3) edges, i.e., every instance of Max-Bisec-ATLB is equivalent to an instance of Max-Bisec-ATLB on a graph with O(k^2) vertices and O(k^3) edges.
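The ⌈|E|/2⌉ lower bound mentioned above can be verified directly on small graphs. A minimal brute-force sketch (the helper `max_bisection` is our own illustration, not the paper's kernelization):

```python
from itertools import combinations
from math import ceil

def max_bisection(n, edges):
    """Brute-force maximum bisection of a graph on vertices 0..n-1.

    A bisection (X, Y) partitions the vertices so that the part sizes
    differ by at most one; its size is the number of crossing edges.
    """
    best = 0
    for X in combinations(range(n), n // 2):
        Xs = set(X)
        size = sum((u in Xs) != (v in Xs) for u, v in edges)
        best = max(best, size)
    return best

# A 5-cycle: |E| = 5, so the maximum bisection has size >= ceil(5/2) = 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
b = max_bisection(5, edges)
assert b >= ceil(len(edges) / 2)
```

Here the optimum is in fact 4 (e.g. X = {0, 2}), strictly above the ⌈|E|/2⌉ bound, which is exactly the gap k that the parameterized problem measures.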

4.
Given a set X of sequences over a finite alphabet, we investigate the following three quantities.
(i)
The feasible predictability of X is the highest success ratio that a polynomial-time randomized predictor can achieve on all sequences in X.
(ii)
The deterministic feasible predictability of X is the highest success ratio that a polynomial-time deterministic predictor can achieve on all sequences in X.
(iii)
The feasible dimension of X is the polynomial-time effectivization of the classical Hausdorff dimension (“fractal dimension”) of X.
Predictability is known to be stable in the sense that the feasible predictability of X ∪ Y is always the minimum of the feasible predictabilities of X and Y. We show that deterministic predictability also has this property if X and Y are computably presentable. We show that deterministic predictability coincides with predictability on singleton sets. Our main theorem states that the feasible dimension of X is bounded above by the maximum entropy of the predictability of X and bounded below by the segmented self-information of the predictability of X, and that these bounds are tight.

5.
The article addresses the problem of reasoning under time constraints with incomplete, vague, and uncertain information. It is based on the idea of Variable Precision Logic (VPL), introduced by Michalski and Winston, which deals with both the problem of reasoning with incomplete information subject to time constraints and the problem of reasoning efficiently with exceptions. It offers mechanisms for handling trade-offs between the precision of inferences and the computational efficiency of deriving them. As an extension of Censored Production Rules (CPRs), which exhibit variable precision in that certainty varies while specificity stays constant, the Hierarchical Censored Production Rules (HCPRs) system of knowledge representation proposed by Bharadwaj and Jain exhibits both variable certainty and variable specificity. Fuzzy Censored Production Rules (FCPRs) are obtained by augmenting the ordinary fuzzy conditional statement "if X is A then Y is B" (or A(x) ⇒ B(y) for short) with an exception condition, and are written in the form "if X is A then Y is B unless Z is C" (or A(x) ⇒ B(y) ∥ C(z)). Such rules are employed in situations in which the fuzzy conditional statement "if X is A then Y is B" holds frequently and the exception condition "Z is C" holds rarely. Thus, using a rule of this type, we are free to ignore the exception condition when the resources needed to establish its presence are tight or there simply is no information available as to whether it holds or does not hold. The if … then part of the FCPR expresses the important information, while the unless part acts only as a switch that changes the polarity of "Y is B" to "Y is not B" when the assertion "Z is C" holds. Our aim is to show how an ordinary fuzzy production rule, on suitable modification and augmentation with relevant information, becomes a Fuzzy Hierarchical Censored Production Rule (FHCPR), which in turn makes it possible to resolve many of the problems associated with the usual fuzzy production rules system.
Examples are given to demonstrate the behavior of the proposed schemes. © 1996 John Wiley & Sons, Inc.

6.
Some upper and lower bounds are obtained for the maximum of the absolute value of the difference between the mutual informations, |I(X; Y) − I(X′; Y′)|, of two pairs of discrete random variables (X, Y) and (X′, Y′) via the variational distance between the probability distributions of these pairs. In particular, the upper bound obtained here substantially generalizes and improves the upper bound of [1]. In some special cases, our upper and lower bounds coincide or are rather close. It is also proved that the lower bound is asymptotically tight in the case where the variational distance between (X, Y) and (X′, Y′) tends to zero.
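For concreteness, both quantities can be computed directly for small joint distributions. A sketch (the helper names are ours, and the 1/2-normalized total variation convention is an assumption; this only illustrates the two quantities, not the paper's bounds):

```python
from math import log2

def mutual_information(p):
    """I(X; Y) in bits for a joint pmf given as {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), v in p.items():
        px[x] = px.get(x, 0.0) + v
        py[y] = py.get(y, 0.0) + v
    return sum(v * log2(v / (px[x] * py[y]))
               for (x, y), v in p.items() if v > 0)

def variational_distance(p, q):
    """Variational distance between two joint pmfs (1/2 * L1 norm)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# X = Y uniform on {0,1} gives I(X;Y) = 1 bit; a small perturbation of
# the joint pmf moves the mutual information only a bounded amount.
p = {(0, 0): 0.5, (1, 1): 0.5}
q = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
```

With these two pmfs the variational distance is 0.1, and the gap |I_p − I_q| stays well below 1 bit, the qualitative behavior the bounds quantify.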

7.
Consideration was given to a specific family of bipartite graphs consisting of two disjoint subsets X and Y of vertices and characterized by the property that each vertex in X (respectively, Y) is connected to each of the remaining vertices in X (Y) by a unique path of length two passing through some vertex in Y (X). The prefix "quasi" reflects the fact that complete connection of the vertices is realized by paths of length two rather than by edges. The problem of constructing uniform minimal graphs with subsets X and Y of equal cardinality, which is of practical interest for complex communication networks, was discussed. It belongs to the class of combinatorial problems of constructing so-called symmetric block designs.

8.
Let K be a number field and F(X,Y) an absolutely irreducible polynomial of K[X,Y] such that the algebraic curve defined by the equation F(X,Y)=0 is rational. In this paper we present practical algorithms for the determination of all solutions of the Diophantine equation F(X,Y)=0 in algebraic integers of K.

9.
Let f(X, Y) be an absolutely irreducible polynomial with integer coefficients such that the curve defined by the equation f(X, Y) = 0 is of genus 0 having at least three infinite valuations. This paper describes a practical general method for the explicit determination of all integer solutions of the Diophantine equation f(X, Y) = 0. Some elaborated examples are given.

10.
The edit distance between two given strings X and Y is the minimum number of edit operations that transform X into Y without performing multiple operations that involve the same position. Ordinarily, string editing is based on character insert, delete, and substitute operations. Motivated by the facts that substring reversals are observed in genomic sequences, and that it is not always possible to transform a given sequence X into a given sequence Y by reversals alone (e.g., X is all 0's, and Y is all 1's), Muthukrishnan and Sahinalp [S. Muthukrishnan, S.C. Sahinalp, Approximate nearest neighbors and sequence comparison with block operations, in: Proc. ACM Symposium on Theory of Computing (STOC), 2000, pp. 416-424; S. Muthukrishnan, S.C. Sahinalp, An improved algorithm for sequence comparison with block reversals, Theoretical Computer Science 321 (1) (2004) 95-101] considered a "simple" well-defined edit distance model in which the edit operations are: replace a character, and reverse and replace a substring. A substring of X can only be reversed if the reversal results in a match in the same position in Y. The cost of each character replacement and substring reversal is 1. The distance in this case is defined only when |X|=|Y|=n. There is an algorithm for computing the distance in this model with worst-case time complexity O(n log^2 n) [S. Muthukrishnan, S.C. Sahinalp, An improved algorithm for sequence comparison with block reversals, Theoretical Computer Science 321 (1) (2004) 95-101]. We present a dynamic programming algorithm with worst-case time complexity O(n^2) whose expected running time is O(n). In our dynamic programming solution the weights of edit operations can vary for different substitutions, and the costs of reversals can be a function of the reversal length.
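Under the stated semantics (unit-cost replacements, plus unit-cost reversal of a substring of X when the reversed substring matches Y in the same positions, |X| = |Y|), a naive dynamic program over prefixes is easy to state. This is only an illustrative O(n^3) sketch of the model, not the paper's O(n^2)/expected-O(n) algorithm:

```python
def reversal_edit_distance(X, Y):
    """Edit distance in the 'replace or reverse-and-replace' model
    sketched above (assumed semantics): dp[i] is the minimum cost to
    make the length-i prefix of X match that of Y."""
    assert len(X) == len(Y)
    n = len(X)
    dp = [float("inf")] * (n + 1)
    dp[0] = 0
    for i in range(1, n + 1):
        # replace (or keep, at cost 0) the single character at i-1
        dp[i] = dp[i - 1] + (X[i - 1] != Y[i - 1])
        # reverse a substring X[j:i] if its reversal matches Y[j:i]
        for j in range(i - 1):
            if X[j:i][::-1] == Y[j:i]:
                dp[i] = min(dp[i], dp[j] + 1)
    return dp[n]
```

For example, reversing all of X turns "abcd" into "dcba" at cost 1, while all-0's versus all-1's needs n replacements, matching the motivation quoted above.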

11.
In this paper we study diagonal processes over time-bounded computations of one-tape Turing machines by diagonalizing only over those machines for which there exist formal proofs that they operate in the given time bound. This replaces the traditional "clock" in resource-bounded diagonalization by formal proofs about running times and establishes close relations between properties of proof systems and the existence of sharp time bounds for one-tape Turing machine complexity classes. These diagonalization methods also show that the Gap Theorem for resource-bounded computations can hold only for those complexity classes which differ from the corresponding provable complexity classes. Furthermore, we show that there exist recursive time bounds T(n) such that the class of languages for which we can formally prove the existence of Turing machines which accept them in time T(n) differs from the class of languages accepted by Turing machines for which we can prove formally that they run in time T(n). We also investigate the corresponding problems for tape-bounded computations and discuss the differences between time-bounded and tape-bounded computations.

12.
This paper deals with the problem of estimating a transmitted string X* by processing the corresponding string Y, which is a noisy version of X*. We assume that Y contains substitution, insertion, and deletion errors, and that X* is an element of a finite (but possibly large) dictionary, H. The best estimate X+ of X* is defined as that element of H which minimizes the generalized Levenshtein distance D(X, Y) between X and Y, over all X ∈ H, such that the total number of errors is not more than K. The trie is a data structure that offers search costs that are independent of the document size. Tries also combine prefixes together, and so by using tries in approximate string matching we can utilize the information obtained in the process of evaluating any one D(Xi, Y) to compute any other D(Xj, Y), where Xi and Xj share a common prefix. In the artificial intelligence (AI) domain, branch and bound (BB) schemes are used when we want to prune paths that have costs above a certain threshold. These techniques have been applied to prune, for example, game trees. In this paper, we present a new BB pruning strategy that can be applied to dictionary-based approximate string matching when the dictionary is stored as a trie. The new strategy attempts to look ahead at each node, c, before moving further, by merely evaluating a certain local criterion at c. The search algorithm according to this pruning strategy will not traverse inside subtrie(c) unless there is a "hope" of determining a suitable string in it. In other words, as opposed to the reported trie-based methods (Kashyap and Oommen in Inf Sci 23(2):123–142, 1981; Shang and Merrett in IEEE Trans Knowledge Data Eng 8(4):540–547, 1996), the pruning is done a priori, before even embarking on the edit distance computations. The new strategy depends highly on the variance of the lengths of the strings in H.
It combines the advantages of partitioning the dictionary according to the string lengths with the advantages gleaned by representing H using the trie data structure. The results demonstrate a marked improvement (up to 30% when costs are of a 0/1 form, and up to 47% when costs are general) with respect to the number of operations needed on three benchmark dictionaries.

13.
Let f(X, Y) be an absolutely irreducible polynomial with integer coefficients such that the curve defined by the equation f(X, Y) = 0 is of genus 0 having at most two infinite valuations. This paper describes a practical general method for the explicit determination of all integer solutions of the Diophantine equation f(X, Y) = 0. Several elaborated examples are given. Furthermore, a necessary and sufficient condition for a curve of genus 0 to have infinitely many integer points is obtained.

14.
One-state deterministic top-down tree transducers (or, tree homomorphisms) cannot handle "prime copying," i.e., their class of output (string) languages is not closed under the operation L → {$(w$)^f(n) ∣ w ∈ L, f(n) ≥ 1}, where f is any integer function whose range contains numbers with arbitrarily large prime factors (such as a polynomial). The exact amount of nonclosure under these copying operations is established for several classes of input (tree) languages. These results are relevant to the extended definable (or, restricted parallel level) languages, to the syntax-directed translation of context-free languages, and to the tree transducer hierarchy.

15.
The center of a language has been defined in [7, 8, 9] as the set of all words which have infinite right completions in the language. In this paper we extend this notion by also taking into account left and two-sided completions. Thus, for any language X, we consider the left center Cl(X), the right center Cr(X) and two different bilateral centers C1(X) and C2(X). Some properties of these centers are derived. In particular, the main results of the paper give some general conditions under which C1(X) = C2(X) and Cr(Cl(X)) = Cl(Cr(X)). These conditions deal with 'strong' and 'weak' iteration properties and 'periodicity' of a language.

16.
Suppose the random vector (X,Y) satisfies the regression model Y = m(X) + σ(X)ε, where m(⋅) is the conditional mean, σ²(⋅) is the conditional variance, and ε is independent of X. The covariate X is d-dimensional (d ≥ 1), the response Y is one-dimensional, and m and σ are unknown but smooth functions. Goodness-of-fit tests for the parametric form of the error distribution are studied under this model, without assuming any parametric form for m or σ. The proposed tests are based on the difference between a nonparametric estimator of the error distribution and an estimator obtained under the null hypothesis of a parametric model. The large-sample properties of the proposed test statistics are obtained, as well as those of the estimator of the parameter vector under the null hypothesis. Finally, the finite-sample behavior of the proposed statistics, and the selection of the bandwidths for estimating m and σ, are extensively studied via simulations.

17.
Let X(t) denote the remaining useful lifetime of a machine, and Y(t) be a standard Brownian motion. Assume that the derivative ρ[X(t), Y(t)] of X(t) is a deterministic function of (at least) Y(t). We consider the two-dimensional degenerate diffusion process (X(t), Y(t)). We obtain explicit expressions for the expected value of the random variable T(x, y) denoting the first time the machine must be replaced, or repaired, for various functions ρ[X(t), Y(t)].

18.
Counting the number of distinct factors in the words of a language gives a measure of complexity for that language similar to the factor complexity of infinite words. As for infinite words, we prove that this complexity function f(n) is either bounded or satisfies f(n) ≥ n+1. We call languages with bounded complexity periodic, and languages with complexity f(n) = n+1 Sturmian. We describe the structure of periodic languages and characterize the Sturmian languages as the sets of factors of (one- or two-way) infinite Sturmian words.
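The f(n) = n+1 case can be observed on a finite sample of a Sturmian word. A sketch using a long prefix of the Fibonacci word, a standard Sturmian example generated by the morphism 0 → 01, 1 → 0 (the helper `factors` is our own):

```python
def factors(words, n):
    """Distinct length-n factors occurring in a set of words."""
    out = set()
    for w in words:
        for i in range(len(w) - n + 1):
            out.add(w[i:i + n])
    return out

# Build a long prefix of the Fibonacci word by iterating the morphism.
w = "0"
for _ in range(15):
    w = "".join("01" if c == "0" else "0" for c in w)

# For small n this prefix already contains all n+1 factors of length n.
complexities = [len(factors([w], n)) for n in range(1, 11)]
```

For n = 1..10 the counts come out to 2, 3, …, 11, exactly n+1; note also that the factor "11" never occurs, which is where the deficit against the 2^n binary factors comes from.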

19.
In this paper, we consider a generalized longest common subsequence problem, the string-excluding constrained LCS (STR-EC-LCS) problem. For two input sequences X and Y of lengths n and m, and a constraint string P of length r, the problem is to find a common subsequence Z of X and Y that excludes P as a substring and whose length is maximized. The problem and its solution were first proposed by Chen and Chao (2011) [1], but we found that their algorithm cannot solve the problem correctly. A new dynamic programming solution for the STR-EC-LCS problem is then presented in this paper. The correctness of the new algorithm is proved. The time complexity of the new algorithm is O(nmr).
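An O(nmr) dynamic program for this problem can be sketched as follows (our own illustrative version, not necessarily the paper's): alongside the usual LCS prefix indices, each state tracks, KMP-style, the longest prefix of P that is a suffix of the subsequence built so far, and any transition that would complete P is forbidden.

```python
def str_ec_lcs(X, Y, P):
    """Length of a longest common subsequence of X and Y that does not
    contain P as a substring; O(n*m*r) sketch, requires len(P) >= 1."""
    n, m, r = len(X), len(Y), len(P)
    assert r >= 1
    # KMP failure function for P.
    fail = [0] * r
    for i in range(1, r):
        k = fail[i - 1]
        while k and P[i] != P[k]:
            k = fail[k - 1]
        fail[i] = k + 1 if P[i] == P[k] else 0

    def step(k, c):
        # matched-prefix length of P after appending character c
        while k and P[k] != c:
            k = fail[k - 1]
        return k + 1 if P[k] == c else 0

    NEG = float("-inf")
    base = [0] + [NEG] * (r - 1)            # empty subsequence: state 0
    prev = [base[:] for _ in range(m + 1)]  # prev[j][k] = dp(i-1, j, k)
    for i in range(1, n + 1):
        cur = [base[:]]
        for j in range(1, m + 1):
            row = [max(prev[j][k], cur[j - 1][k]) for k in range(r)]
            if X[i - 1] == Y[j - 1]:
                c = X[i - 1]
                for k in range(r):
                    if prev[j - 1][k] != NEG:
                        k2 = step(k, c)
                        if k2 < r:          # P must never complete
                            row[k2] = max(row[k2], prev[j - 1][k] + 1)
            cur.append(row)
        prev = cur
    return max(prev[m])
```

For example, with X = Y = "ab" and P = "ab" the unconstrained LCS "ab" is forbidden, so the answer drops to 1.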

20.
Due to a large number of applications, bicliques of graphs have been widely considered in the literature. This paper focuses on non-induced bicliques. Given a graph G=(V,E) on n vertices, a pair (X,Y), with X, Y ⊆ V and X ∩ Y = ∅, is a non-induced biclique if {x,y} ∈ E for every x ∈ X and y ∈ Y. The NP-complete problem of finding a non-induced (k1,k2)-biclique asks whether G contains a non-induced biclique (X,Y) such that |X| = k1 and |Y| = k2. In this paper, we design a polynomial-space O(n^1.6914)-time algorithm for this problem. It is based on an algorithm for bipartite graphs that runs in time O(n^1.30052). In deriving this algorithm, we also exhibit a relation to the spare allocation problem known from memory chip fabrication. As a byproduct, we show that the constraint bipartite vertex cover problem can be solved in time O(n^1.30052).
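To pin down the definition, a brute-force check is enough on small graphs (our own exponential-time sketch; the point of the paper is the far faster O(n^1.6914) algorithm):

```python
from itertools import combinations

def has_non_induced_biclique(adj, k1, k2):
    """Does the graph contain disjoint sets X, Y with |X| = k1,
    |Y| = k2 and every x in X adjacent to every y in Y?"""
    n = len(adj)
    for X in combinations(range(n), k1):
        # vertices outside X adjacent to all of X can serve as Y
        cand = [v for v in range(n)
                if v not in X and all(adj[x][v] for x in X)]
        if len(cand) >= k2:   # any k2 of them form Y
            return True
    return False

def adj_matrix(n, edges):
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    return adj

# The 4-cycle contains a non-induced (2,2)-biclique: X={0,2}, Y={1,3}.
c4 = adj_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

Note that "non-induced" only requires the X-Y edges to be present; edges inside X or Y are allowed, which is why any k2-subset of the candidate set works.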


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)