Similar Documents
20 similar documents found (search time: 93 ms)
1.
Topological dynamics of cellular automata (CA), inherited from classical dynamical systems theory, has been essentially studied in dimension 1. This paper focuses on higher-dimensional CA and aims at showing that the situation is different and more complex starting from dimension 2. The main results are the existence of non-sensitive CA without equicontinuous points, the non-recursivity of sensitivity constants, the existence of CA having only non-recursive equicontinuous points, and the existence of CA having only countably many equicontinuous points. They all show a difference between dimension 1 and higher dimensions. Thanks to these new constructions, we also extend undecidability results concerning topological classification previously obtained in the 1D case. Finally, we show that the set of sensitive CA is only $\varPi_{2}^{0}$ in dimension 1, but becomes $\varSigma_{3}^{0}$-hard for dimension 3.

2.
We continue the study of cellular automata (CA) directional dynamics, i.e., the behavior of the joint action of CA and shift maps. This notion has been investigated for general CA in the case of expansive dynamics by Boyle and Lind, and by Sablik for sensitivity and equicontinuity. In this paper we give a detailed classification for the class of additive CA, providing non-trivial examples for some classes of Sablik's classification. Moreover, we extend the directional dynamics studies by also considering factor languages and attractors.

3.
Applying the concept of fuzzy sets to hypernear-rings, Davvaz introduced the notion of a fuzzy hyperideal as early as 1999, and it has since been studied by several authors. In this paper, we study fuzzy hyperideals of hypernear-rings and investigate some related properties. In particular, we characterize maximal fuzzy hyperideals and show that every maximal fuzzy hyperideal of a hypernear-ring is completely normal.

4.
Bisimilar control affine systems (cited by: 2; self-citations: 0, citations by others: 2)
The notion of bisimulation plays a very important role in theoretical computer science, where it provides several notions of equivalence between models of computation. These equivalences are in turn used to simplify verification and synthesis for these models, as well as to enable compositional reasoning. In systems theory, a similar notion is also of interest for developing modular verification and design tools for purely continuous or hybrid control systems. In this paper, we introduce two notions of bisimulation for nonlinear systems. We present differential geometric characterizations of these notions and show that bisimilar systems of different dimensions are obtained by factoring out certain invariant distributions. Furthermore, we show that all bisimilar systems of different dimension are of this form.

5.
Large complex networks occur in many applications of computer science. The complex network zeta function and the graph surface function have been used to characterize these networks and to define a dimension for complex networks. In this work we present three new results related to the complex network dimension. First, we show the relationship of the concept to Kolmogorov complexity. Second, we show how the definition of complex network dimension can be reformulated by defining the concept for a single node, and then defining the complex network dimension as the supremum over all nodes. This makes the concept work better for formally infinite graphs. Third, we study interesting parallels to zeta dimension, a notion originally from number theory which has found connections to theoretical computer science. These parallels lead to a deeper insight into the complex network dimension, e.g., the formulation in terms of the entropy and a theorem relating dimension to connectivity.
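The per-node reformulation mentioned in this abstract admits a short illustration. The sketch below is our own hedged example, not the paper's code: `node_zeta` sums d(i,j)^(-s) over shortest-path distances from one node, and `graph_zeta` averages the per-node terms (the averaging convention is an assumption; the abstract itself takes a supremum over nodes when defining the dimension).

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path (hop) distances from src in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def node_zeta(adj, i, s):
    """Zeta term for a single node: sum over j != i of d(i, j)^(-s)."""
    d = bfs_distances(adj, i)
    return sum(dij ** -s for j, dij in d.items() if j != i)

def graph_zeta(adj, s):
    """Average the per-node terms (one common normalization of the
    complex network zeta function)."""
    return sum(node_zeta(adj, i, s) for i in adj) / len(adj)

# 4-cycle: each node sees two neighbours at distance 1 and one at distance 2.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(graph_zeta(cycle4, 2.0))  # 2 * 1^-2 + 1 * 2^-2 = 2.25
```

The dimension is then read off from how this quantity behaves as the exponent s varies.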

6.
The hitting time of a classical random walk (Markov chain) is the time required to detect the presence of—or equivalently, to find—a marked state. The hitting time of a quantum walk is subtler to define; in particular, it is unknown whether the detection and finding problems have the same time complexity. In this paper we define new Monte Carlo type classical and quantum hitting times, and we prove several relationships among these and the already existing Las Vegas type definitions. In particular, we show that for some marked state the two types of hitting time are of the same order in both the classical and the quantum case. Then, we present new quantum algorithms for the detection and finding problems. The complexities of both algorithms are related to the new, potentially smaller, quantum hitting times. The detection algorithm is based on phase estimation and is particularly simple. The finding algorithm combines a similar phase-estimation-based procedure with ideas of Tulsi from his recent theorem (Tulsi, Phys. Rev. A 78:012310, 2008) for the 2D grid. Extending his result, we show that we can find a unique marked element with constant probability and with the same complexity as detection for a large class of quantum walks—the quantum analogue of state-transitive reversible ergodic Markov chains. Further, we prove that for any reversible ergodic Markov chain P, the quantum hitting time of the quantum analogue of P has the same order as the square root of the classical hitting time of P. We also investigate the (im)possibility of achieving a gap greater than quadratic using an alternative quantum walk. In doing so, we define a notion of reversibility for a broad class of quantum walks and show how to derive from any such quantum walk a classical analogue. For the special case of quantum walks built on reflections, we show that the hitting time of the classical analogue is exactly the square of the quantum walk.
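For concreteness, the classical hitting time that these quantum definitions are measured against can be computed exactly for a small chain by solving a linear system on the unmarked states. This is a generic textbook computation, not the authors' construction; taking the start state from the stationary distribution is our assumption here.

```python
import numpy as np

def classical_hitting_time(P, marked):
    """Expected number of steps for the chain P to first reach a marked
    state, with the start drawn from the stationary distribution of P.
    Solves (I - Q) h = 1, where Q is P restricted to unmarked states."""
    n = P.shape[0]
    unmarked = [i for i in range(n) if i not in marked]
    Q = P[np.ix_(unmarked, unmarked)]
    h = np.zeros(n)
    h[unmarked] = np.linalg.solve(np.eye(len(unmarked)) - Q,
                                  np.ones(len(unmarked)))
    # stationary distribution: normalized left eigenvector for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    return float(pi @ h)

# Lazy symmetric 2-state chain with state 1 marked:
# h_0 satisfies h_0 = 1 + 0.5 h_0, so h_0 = 2, and pi = (1/2, 1/2).
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
print(classical_hitting_time(P, {1}))  # 0.5 * 2 + 0.5 * 0 = 1.0
```

The paper's quadratic relationship says the quantum hitting time of the quantum analogue of such a P scales like the square root of this quantity.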

7.
M. Kearns, D. Ron. Neural Computation, 1999, 11(6):1427-1453
In this article we prove sanity-check bounds for the error of the leave-one-out cross-validation estimate of the generalization error: that is, bounds showing that the worst-case error of this estimate is not much worse than that of the training error estimate. The name sanity check refers to the fact that although we often expect the leave-one-out estimate to perform considerably better than the training error estimate, we are here only seeking assurance that its performance will not be considerably worse. Perhaps surprisingly, such assurance has been given only for limited cases in the prior literature on cross-validation. Any nontrivial bound on the error of leave-one-out must rely on some notion of algorithmic stability. Previous bounds relied on the rather strong notion of hypothesis stability, whose application was primarily limited to nearest-neighbor and other local algorithms. Here we introduce the new and weaker notion of error stability and apply it to obtain sanity-check bounds for leave-one-out for other classes of learning algorithms, including training error minimization procedures and Bayesian algorithms. We also provide lower bounds demonstrating the necessity of some form of error stability for proving bounds on the error of the leave-one-out estimate, and the fact that for training error minimization algorithms, in the worst case such bounds must still depend on the Vapnik-Chervonenkis dimension of the hypothesis class.
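The two estimates being compared in these bounds can be sketched in a few lines. The learner and data below are toy assumptions of ours (a 1-nearest-neighbour rule on the line); the paper's contribution is the stability analysis, not this computation.

```python
def leave_one_out_error(train, predict, X, y):
    """Leave-one-out cross-validation estimate of the generalization error:
    for each point, train on the other n-1 points and test on the held-out one."""
    n = len(X)
    errors = 0
    for i in range(n):
        model = train(X[:i] + X[i+1:], y[:i] + y[i+1:])
        errors += predict(model, X[i]) != y[i]
    return errors / n

def training_error(train, predict, X, y):
    """Resubstitution estimate: train and test on the same sample."""
    model = train(X, y)
    return sum(predict(model, x) != t for x, t in zip(X, y)) / len(X)

# Toy learner: 1-nearest neighbour on the real line.
def train_1nn(X, y):
    return list(zip(X, y))

def predict_1nn(model, x):
    return min(model, key=lambda p: abs(p[0] - x))[1]

X = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
y = [0, 0, 0, 1, 1, 1]
print(leave_one_out_error(train_1nn, predict_1nn, X, y))  # 0.0 on this well-separated toy set
print(training_error(train_1nn, predict_1nn, X, y))       # 0.0
```

A sanity-check bound in the paper's sense then controls how far the first quantity can be from the true error, relative to the second.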

8.
Linear discriminant analysis (LDA) is one of the most popular dimension reduction methods and has been widely used in many applications. In the last decades, many LDA-based dimension reduction algorithms have been reported. Among these methods, orthogonal LDA (OLDA) is a famous one, and several different implementations of OLDA have been proposed. In this paper, we propose a new and fast implementation of OLDA. Compared with the other OLDA implementations, our proposed implementation is the fastest one when the dimensionality d is larger than the sample size n. Then, based on our proposed implementation of OLDA, we present an incremental OLDA algorithm which can accurately update the projection matrix of OLDA when new samples are added to the training set. The effectiveness of our proposed new OLDA algorithm and its incremental version is demonstrated on some real-world data sets.

9.
One of the key features of logic programming is the notion of goal-directed provability. In intuitionistic logic, the notion of uniform proof has been used as a proof-theoretic characterization of this property. Whilst the connections between intuitionistic logic and computation are well known, there is no reason per se why a similar notion cannot be given in classical logic. In this paper we show that there are two notions of goal-directed proof in classical logic, both of which are suitably weaker than that for intuitionistic logic. We show the completeness of this class of proofs for certain fragments, which thus form logic programming languages. As there are more possible variations on the notion of goal-directed provability in classical logic, there is a greater diversity of classical logic programming languages than intuitionistic ones. In particular, we show how logic programs may contain disjunctions in this setting. This provides a proof-theoretic basis for disjunctive logic programs, as well as characterising the “disjunctive” nature of answer substitutions for such programs in terms of the provability properties of the classical connectives ∧ and ∨.

10.
In real-world applications, information is often provided by multiple sources having different priority levels, reflecting for instance their reliability. This paper investigates “Prioritized Removed Sets Revision” (PRSR) for revising stratified DL-Lite knowledge bases when a new sure piece of information, called the input, is added. The strategy of revision is based on inconsistency minimization and consists in determining the smallest subsets of assertions (prioritized removed sets) that should be dropped from the current stratified knowledge base in order to restore consistency and accept the input. We consider different forms of input: a membership assertion, or a positive or negative inclusion axiom. To characterize our revision approach, we first rephrase Hansson’s postulates for belief base revision within a DL-Lite setting, and we then give logical properties of PRSR operators. In some situations, the revision process leads to several possible revised knowledge bases, where defining a selection function is required to keep results within the DL-Lite fragment. The last part of the paper shows how to use the notion of hitting set in order to compute the PRSR outcome. We also study the complexity of PRSR operators and show that, in some cases, the result can be computed in polynomial time.

11.
Non-destructive testing (NDT) has been widely used in many fields; it can easily be seen in shipbuilding, aerospace, weapons manufacturing, and so on. Industrial computed tomography (ICT) is currently one of the best non-destructive testing methods, but it has not been used widely because it is computationally expensive and time-consuming. Defect location is one of the most important processing steps in digital image analysis, and locating defects directly reduces the time spent in testing; in some cases only the location of defects is required. We therefore divide the CT images into several small blocks of equal area and then calculate the fractal dimension of each block. By examining the fractal dimension values and the number of connected regions they form, we can locate defects in the image. The results show that the block fractal dimension is a useful and time-saving defect location method.
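A hedged sketch of the per-block computation described above, using the standard box-counting estimator of fractal dimension (the threshold constant, power-of-two block side, and flagging rule are our assumptions; a real CT pipeline would tune all three):

```python
import numpy as np

def box_counting_dimension(block, threshold=0.5):
    """Estimate the box-counting (fractal) dimension of a 2-D block.
    Thresholds the block to binary, counts occupied boxes at dyadic
    scales s, and fits the slope of log N(s) against log(1/s)."""
    binary = block > threshold
    n = binary.shape[0]  # assume a square block with power-of-two side
    sizes, counts = [], []
    s = n
    while s >= 1:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                if binary[i:i+s, j:j+s].any():
                    count += 1
        sizes.append(s)
        counts.append(count)
        s //= 2
    # slope of log(count) vs. log(1/size) is the dimension estimate
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(coeffs[0])

print(box_counting_dimension(np.ones((8, 8))))  # fully filled block -> 2.0
```

Blocks whose estimated dimension (together with the connected regions they form) stands out against the chosen thresholds would then be flagged as defect candidates, matching the procedure sketched in the abstract.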

12.
In the proof-theoretic study of logic, the notion of normal proof has been understood and investigated as a metalogical property. Usually we formulate a system of logic, identify a class of proofs as normal proofs, and show that every proof in the system reduces to a corresponding normal proof. This paper develops a system of modal logic that is capable of expressing the notion of normal proof within the system itself, thereby making normal proofs an inherent property of the logic. Using a modality △ to express the existence of a normal proof, the system provides a means for both recognizing and manipulating its own normal proofs. We develop the system as a sequent calculus with the implication connective ⊃ and the modality △, and prove the cut elimination theorem. From the sequent calculus, we derive two equivalent natural deduction systems.

13.
This paper studies the problem of stability analysis of Takagi-Sugeno free fuzzy systems with time-varying uncertainties. In our prior study, we represented the time-varying uncertainty by characteristic interval matrices and studied the stability of Takagi-Sugeno free fuzzy systems with consequent parameter uncertainties. Based on Mayer's convergence theorem for powers of a single interval matrix and its generalization, we further proposed some sufficient conditions for the Takagi-Sugeno free fuzzy system with time-varying uncertainties to be globally asymptotically stable. In this paper, we propose the notion of simultaneously nilpotent interval matrices to investigate when the Takagi-Sugeno free fuzzy system with time-varying uncertainties is strongly stable within a number of steps related to the dimension of the interval matrices. Moreover, a unique situation for the deterministic Takagi-Sugeno free fuzzy system to be strongly stable within a number of steps is derived as well, where the number of steps relates to the dimension of the characteristic matrices for the deterministic system.

14.
The implication of multivalued dependencies in relational databases was originally defined in the context of some fixed finite universe. While axiomatisability and implication problems have been intensely studied with respect to this notion, almost no research has been devoted to the alternative notion of implication in which the underlying universe of attributes is left undetermined. Based on a set of common inference rules, we establish all axiomatisations in undetermined universes, and all axiomatisations in fixed universes that indicate the role of the complementation rule as a means of database normalisation. This characterises the expressiveness of several incomplete sets of inference rules. We also establish relationships between axiomatisations in fixed and undetermined universes, and study the time complexity of the implication problem in undetermined universes. The results of this paper establish a foundation for reasoning about multivalued dependencies without the assumption of a fixed underlying universe.

15.
The notion of context appears in computer science, as well as in several other disciplines, in various forms. In this paper, we present a general framework for representing the notion of context in information modeling. First, we define a context as a set of objects, within which each object has a set of names and possibly a reference: the reference of the object is another context which “hides” detailed information about the object. Then, we introduce the possibility of structuring the contents of a context through the traditional abstraction mechanisms, i.e., classification, generalization, and attribution. We show that, depending on the application, our notion of context can be used as an independent abstraction mechanism, either in an alternative or a complementary capacity with respect to the traditional abstraction mechanisms. We also study the interactions between contextualization and the traditional abstraction mechanisms, as well as the constraints that govern such interactions. Finally, we present a theory for contextualized information bases. The theory includes a set of validity constraints, a model theory, as well as a set of sound and complete inference rules. We show that our core theory can be easily extended to support embedding of particular information models in our contextualization framework.

16.
Problem structuring has often been seen as providing an appropriate framework for Community OR. From our point of view, the question to raise is not how to choose the best method for any particular situation, but how to act on opportunities to use different methods in practice, i.e. making the process the central focus. In this paper we examine our use of problem-structuring methods, in the context provided by postmodernist and poststructuralist ideas, in some Community OR (COR) case studies. In particular this paper will:
  • scrutinise the notion of 'problem' and suggest that recasting it in terms of 'issues' might open a larger space for action and choice;
  • show the applicability of issue-structuring methods in combination with other participatory methods, e.g. rapid appraisal and action methods, especially in the context of evaluation, with which most of our case studies are concerned;
  • illustrate the possibilities of mixing and matching different parts of different issue-structuring methods in practice.
In this way we demonstrate some ways in which the practice of OR can be revitalised, in particular through a re-conceptualisation of the notion of 'praxis'.

17.
International Journal of General Systems, 2012, 41(6):539-554
We survey results on decidability questions concerning cellular automata. Properties discussed include reversibility and surjectivity and their variants, time-symmetry and conservation laws, nilpotency and other properties of the limit set and the trace, chaoticity-related properties such as sensitivity to initial conditions and mixing of the space, and dynamics from finite initial configurations. We also briefly discuss the tiling problem and its variants, and consider the influence of the dimension of the space on the decidability status of these questions.

18.
Set systems of finite VC dimension are frequently used in applications relating to machine learning theory and statistics. Two simple types of VC classes which have been widely studied are the maximum classes (those which are extremal with respect to Sauer's lemma) and so-called Dudley classes, which arise as sets of positivity for linearly parameterized functions. These two types of VC class were related by Floyd, who gave sufficient conditions for when a Dudley class is maximum. It is widely known that Floyd's condition applies to positive Euclidean half-spaces and certain other classes, such as sets of positivity for univariate polynomials. In this paper we show that Floyd's lemma applies to a wider class of linearly parameterized functions than has been formally recognized to date. In particular we show that, modulo some minor technicalities, the set of positivity for any linear combination of real analytic functions is maximum on points in general position. This includes sets of positivity for multivariate polynomials as a special case.

19.
Metaquery (metapattern) is a data mining tool which is useful for learning rules involving more than one relation in the database. The notion of a metaquery has been proposed as a template or a second-order proposition in a language that describes the type of pattern to be discovered. This tool has already been successfully applied to several real-world applications. In this paper we advance the state of the art in metaquery research in several ways. First, we argue that the notion of a support value for metaqueries, where a support value is intuitively some indication of the relevance of the rules to be discovered, is not adequately defined in the literature, and hence propose our own definition. Second, we analyze some of the related computational problems, classify them as NP-hard, and point out some tractable cases. Third, we propose some efficient algorithms for computing support and present preliminary experimental results that indicate the usefulness of our algorithms.
