Similar Documents
20 similar documents found (search time: 15 ms)
1.
Xue, Yige; Deng, Yong. Applied Intelligence, 2022, 52(7): 7818-7831

The Möbius transformation is an important tool for information inversion and an active research topic: it recovers unknown information from known information, which indicates a strong capacity for information processing. Generalized evidence theory extends classical evidence theory; when the belief degree of the empty set is 0, it degenerates to classical Dempster-Shafer evidence theory. How to apply the Möbius transformation within generalized evidence theory, however, has remained an open problem. This paper proposes a Möbius transformation for generalized evidence theory that performs the corresponding function inversion effectively. Numerical examples demonstrate its validity, and the experimental results show that it is an effective inversion method for generalized evidence theory.
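For concreteness, the classical Möbius pair in evidence theory relates a mass function m and a belief function Bel by Bel(A) = Σ_{B⊆A} m(B) and, inversely, m(A) = Σ_{B⊆A} (−1)^{|A∖B|} Bel(B). The sketch below (ours, not the paper's construction) illustrates this inversion, assuming the generalized setting is captured simply by allowing nonzero mass on the empty set and using the cumulative form of Bel:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def belief_from_mass(mass, frame):
    """Cumulative belief: Bel(A) = sum of m(B) over all B subseteq A."""
    return {A: sum(v for B, v in mass.items() if B <= A) for A in powerset(frame)}

def mobius_inverse(bel, frame):
    """Moebius inversion: m(A) = sum over B subseteq A of (-1)^|A\\B| * Bel(B)."""
    return {A: sum((-1) ** len(A - B) * bel[B] for B in powerset(frame) if B <= A)
            for A in powerset(frame)}

frame = {'a', 'b'}
mass = {frozenset(): 0.1,            # nonzero mass on the empty set: the generalized case
        frozenset({'a'}): 0.4,
        frozenset({'b'}): 0.2,
        frozenset({'a', 'b'}): 0.3}
bel = belief_from_mass(mass, frame)
recovered = mobius_inverse(bel, frame)
assert all(abs(recovered[A] - mass[A]) < 1e-12 for A in recovered)
```

Round-tripping mass to belief and back recovers the original assignment exactly, which is the inversion property the abstract appeals to.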


2.
A new axiomatic system OST of operational set theory is introduced in which the usual language of set theory is expanded to allow us to talk about (possibly partial) operations applicable both to sets and to operations. OST is equivalent in strength to admissible set theory, and a natural extension of OST is equivalent in strength to ZFC. The language of OST provides a framework in which to express “small” large cardinal notions—such as those of being an inaccessible cardinal, a Mahlo cardinal, and a weakly compact cardinal—in terms of operational closure conditions that specialize to the analogue notions on admissible sets. This illustrates a wider program whose aim is to provide a common framework for analogues of large cardinal notions that have appeared in admissible set theory, admissible recursion theory, constructive set theory, constructive type theory, explicit mathematics, and systems of recursive ordinal notations that have been used in proof theory.

3.
We present an interpretation of a constructive domain theory in Martin-Löf's type theory. More specifically, we construct a well-pointed Cartesian closed category of semilattices and approximable mappings. This construction is completely formalized and checked using the interactive proof assistant ALF. We base our work on Martin-Löf's domain interpretation of the theory of expressions underlying type theory. But our emphasis is different from Martin-Löf's, who interprets the program forms of type theory and proves a correspondence between their denotational and operational semantics. We instead show that a theory of domains can be developed within a well-defined fragment of (total) type theory. This is an important step toward constructing a model of all of partial type theory (type theory extended with general recursion) inside total type theory.

4.

This article analyzes how the battle between computer security experts and cyberterrorists can be explained through game theory. It is important because it not only applies game theory to the study of cyberterrorism, which has rarely been done so far, but also breaks new ground by intersecting the game-theoretic model with another theory, social network theory. An important thesis of this analysis is that under the principles of game theory, each player is assumed to be rational: all players wish the outcome to be as positive or rewarding as possible. Another key argument is that game theory is a postmodern theory; against opponents who wage attacks in a postmodern fashion, conventional strategies lead nowhere. The cyberterrorist and the cyber forensics expert not only engage in real-time game play but also use tactics that are not conceivable in conventional conflict.

5.
The Nelson-Oppen combination method combines decision procedures for first-order theories over disjoint signatures into a single decision procedure for the union theory. To be correct, the method requires that the component theories be stably infinite. This restriction makes the method inapplicable to many interesting theories, such as theories having only finite models. In this paper we provide a new combination method that can combine any theory that is not stably infinite with another theory, provided that the latter is what we call a shiny theory. Examples of shiny theories include the theory of equality, the theory of partial orders, and the theory of total orders. An interesting consequence of our results is that any decision procedure for the satisfiability of quantifier-free Σ-formulae in a Σ-theory T can always be extended to accept inputs over an arbitrary signature Ω ⊇ Σ.
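For background, the classical Nelson-Oppen procedure first purifies a mixed input: each subterm whose top symbol belongs to the other theory is abstracted by a fresh shared variable with a defining equation. Below is a minimal sketch of that purification step under our own toy term encoding; the paper's new method for combining non-stably-infinite theories with shiny theories is not reproduced here.

```python
import itertools

OWNER = {'f': 'T1', '+': 'T2', '1': 'T2'}       # which theory owns each symbol
fresh = (f'v{i}' for i in itertools.count())    # supply of fresh shared variables

def purify(term, theory, defs):
    """Return a term pure in `theory`; alien subterms are replaced by fresh
    variables whose defining equations are collected in `defs`."""
    if isinstance(term, str):                   # variables are shared as-is
        return term
    op, *args = term
    if OWNER[op] != theory:                     # alien symbol: abstract it away
        v = next(fresh)
        defs.append((v, purify(term, OWNER[op], defs)))
        return v
    return (op, *(purify(a, theory, defs) for a in args))

# f(x + 1) = y is mixed; purifying the left side for T1 yields f(v0) = y
# together with the T2 definition v0 = x + 1.
defs = []
print(purify(('f', ('+', 'x', ('1',))), 'T1', defs), defs)
# -> ('f', 'v0') [('v0', ('+', 'x', ('1',)))]
```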

6.
Knowledge acquisition with machine learning techniques is a fundamental requirement for knowledge discovery from databases and data mining systems. Two techniques in particular — inductive learning and theory revision — have been used toward this end. A method that combines both approaches to effectively acquire theories (regularity) from a set of training examples is presented: inductive learning is used to acquire new regularity from the training examples, and theory revision is used to improve an initial theory. In addition, a theory preference criterion combining an MDL-based heuristic with the Laplace estimate is employed to select the most promising theory. The resulting algorithm, which integrates inductive learning and theory revision under this criterion, can deal with complex problems and obtain theories that are useful in terms of predictive accuracy.
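The Laplace estimate named in the preference criterion is a standard smoothed accuracy score. A minimal sketch of just that component (the MDL-based heuristic it is combined with is omitted):

```python
def laplace_estimate(n_correct, n_covered, n_classes=2):
    """Laplace-corrected accuracy of a theory on the examples it covers:
    (n_correct + 1) / (n_covered + n_classes)."""
    return (n_correct + 1) / (n_covered + n_classes)

# The correction favors well-supported theories: a perfect score on 40
# examples beats a perfect score on 4.
print(laplace_estimate(4, 4))     # 0.833...
print(laplace_estimate(40, 40))   # 0.976...
```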

7.

Gnostical theory (GT) is a new approach to the processing of data influenced by uncertainty. For GT, as for any theory of uncertainty, the problem of determining the types of uncertain data to which the theory can be successfully applied is of primary interest.

Problems regarding the representation and/or modelling of data within the gnostical theory of uncertain data are the main topics of this paper.

A full characterization of models of data satisfying the first axiom of GT is given. Some other important models of data used in GT are introduced and analyzed.

Measurement theory, in particular the theory of additive relational structures, is used as a tool.

8.
Some aspects of the physical nature of language are discussed. In particular, physical models of language must exist that are efficiently implementable. The existence requirement is essential because without physical models no communication or thinking would be possible. Efficient implementability for creating and reading language expressions is discussed and illustrated with a quantum mechanical model. The reason for interest in language is that language expressions can have meaning, either as an informal language or as a formal language associated with mathematical or physical theories. It is noted that any universally applicable physical theory, or coherent theory of physics and mathematics together, includes in its domain physical models of expressions for both the informal language used to discuss the theory and the expressions of the theory itself. It follows that there must be some formulas in the formal theory that express some of their own physical properties. The inclusion of intelligent systems in the domain of the theory means that the theory, e.g., quantum mechanics, must describe, in some sense, its own validation. Maps of language expressions into physical states are discussed. A spin projection example is discussed as are conditions under which such a map is a Gödel map. The possibility that language is also mathematical is very briefly discussed. PACS: 03.67–a; 03.65.Ta; 03.67.Lx

9.
Goldsmith, Judy; Sloan, Robert H.; Turán, György. Machine Learning, 2002, 47(2-3): 257-295
The theory revision, or concept revision, problem is to correct a given, roughly correct concept. This problem is considered here in the model of learning with equivalence and membership queries. A revision algorithm is considered efficient if the number of queries it makes is polynomial in the revision distance between the initial theory and the target theory, and polylogarithmic in the number of variables and the size of the initial theory. The revision distance is the minimal number of syntactic revision operations, such as the deletion or addition of literals, needed to obtain the target theory from the initial theory. Efficient revision algorithms are given for three classes of disjunctive normal form expressions: monotone k-DNF, monotone m-term DNF and unate two-term DNF. A negative result shows that some monotone DNF formulas are hard to revise.
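As a toy illustration of revision distance (the encoding is ours; the paper's query-efficient algorithms are not reproduced), a monotone DNF can be represented as a set of terms, each a frozenset of variables, so that deleting one literal is one syntactic revision operation:

```python
initial = {frozenset({'x1', 'x2'}), frozenset({'x3'})}   # x1 x2 OR x3
target  = {frozenset({'x1'}), frozenset({'x3'})}         # x1 OR x3

def evaluate(dnf, assignment):
    """A monotone DNF is true iff some term has all its variables True."""
    return any(all(assignment[v] for v in term) for term in dnf)

def delete_literal(dnf, term, literal):
    """One revision operation: drop `literal` from `term`."""
    return (dnf - {term}) | {frozenset(term - {literal})}

revised = delete_literal(initial, frozenset({'x1', 'x2'}), 'x2')
assert revised == target          # revision distance 1: a single deletion
assert evaluate(revised, {'x1': True, 'x2': False, 'x3': False})
```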

10.
A hybrid uncertainty theory is developed to bridge the gap between fuzzy set theory and Dempster-Shafer theory. Its basis is the Dempster-Shafer formalism, which is extended to include a complete set of basic operations for manipulating uncertainties in a set-theoretic framework. The new theory, operator-belief theory (OT), retains the probabilistic flavor of Dempster's original point-to-set mappings but includes the potential for defining a wider range of operators like those found in fuzzy set theory.

The basic operations defined for OT in this paper include those for dominance and order, union, intersection, complement, and general mappings. Several sample problems in approximate reasoning are worked out to illustrate the new approach and to compare it with the other theories currently in use. A general method for extending the theory, using fuzzy set theory as a guide, is suggested.


11.
Cultural theory is a natural science, with properties similar to other natural sciences: it observes natural events and creates testable, and tested, theories. Mathematical anthropology is the body of mathematics used to construct theories of culture. A mathematical theory of certain key features of culture now exists. This theory is predictive and verifiable, much as physical theory is predictive and verifiable by comparison to independent observations. The mathematical theory of culture has been successfully applied to interpret cultural structural changes over time, to compute the population changes associated with those structural changes, and to interpret specific events. These facts imply that a more general theory, not only of culture but of history, is possible, and they show how it can be constructed.

12.
This paper presents a grounded theory of the flow experiences of Web users engaged in information-seeking activities. The term flow refers to a state of consciousness that is sometimes experienced by individuals who are deeply involved in an enjoyable activity. The experience is characterized by some common elements: a balance between the challenges of an activity and the skills required to meet those challenges; clear goals and feedback; concentration on the task at hand; a sense of control; a merging of action and awareness; a loss of self-consciousness; a distorted sense of time; and the autotelic experience. The grounded theory research method employed in this study is a primarily inductive investigative process in which the researcher formulates a theory about a phenomenon by systematically gathering and analysing relevant data; the aim of this method is building theory, not testing it. The data gathered for this study consisted primarily of semi-structured in-depth interviews with informants of varying gender, age, educational attainment, occupation and Web experience who could recall experiencing flow while using the Web.

13.
Today, the world's major powers all treat artificial intelligence as a national strategy, and the development of AI is rapidly changing how people live and think. In China, a small group of researchers has spent more than twenty years building universally applicable foundational theories of AI on the basis of dialectical materialism, covering the formation mechanism of intelligence, its logical and mathematical foundations, coordination mechanisms, and the transformation of contradictions. They have respectively established mechanism-based artificial intelligence theory, universal logic, factor space theory, coordinatics, extenics, and set pair analysis. Among these, mechanism-based AI theory is a general theory grounded in the formation mechanism of intelligence; it organically unifies the three existing schools of structuralism, functionalism, and behaviorism, treating consciousness, emotion, and reason as a trinity. Factor space theory provides its mathematical foundation, and universal logic provides its logical foundation. This paper introduces the basic ideas, theoretical basis, and application methods of universal logic, and explains its theoretical significance and practical value. In particular, propositional universal logic (comprising rigid logic and flexible logic), built on generalized probability theory, can be viewed as a complete library of propositional-level intelligent information-processing operators. The library contains all 18 flexible information-processing modes (including the 16 Boolean modes), rigorously distinguished by a type code <a,b,e>, with which one can locate the complete cluster of information-processing operators suited to a task. Within each mode, the combined states of the various uncertainties are rigorously distinguished by an uncertainty-degree attribute code <k,h,β,e>, with which a specific operator can be precisely selected from the mode's complete operator cluster. Flexible information processing is thus essentially a combination lock that opens only with the dedicated code <a,b,e>+<k,h,β,e>; operators cannot be matched arbitrarily. Since there are only 18 modes, and each mode varies continuously from its maximal operator to its minimal operator, it has been proved that no propositional operator is omitted.
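The claim that each mode varies continuously between a maximal and a minimal operator can be illustrated with the standard Schweizer-Sklar t-norm family, a common parameterized conjunction in flexible logics; this sketch is our illustration, not necessarily the operator cluster singled out by the codes <a,b,e> and <k,h,β,e>:

```python
def ss_tnorm(a, b, p):
    """Schweizer-Sklar t-norm on (0, 1]: varies continuously with p from
    min(a, b) (p -> -inf) to the drastic product (p -> +inf); p = 1 gives
    the Lukasiewicz t-norm, p = 0 the product."""
    if a == 0.0 or b == 0.0:
        return 0.0                         # boundary condition T(0, x) = 0
    if p == 0:
        return a * b                       # limiting case: product t-norm
    base = a ** p + b ** p - 1.0
    return base ** (1.0 / p) if base > 0 else 0.0

for p in (-50.0, -1.0, 0.0, 1.0, 50.0):
    print(p, round(ss_tnorm(0.8, 0.6, p), 4))
# slides from ~min(0.8, 0.6) = 0.6 down toward 0 as p grows
```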

14.
The specification and management of requirements is widely considered to be one of the most important yet most problematic activities in software engineering. In some applications, such as safety-critical areas or knowledge-based systems, the construction of a requirements domain theory is regarded as an important part of this activity. Building and maintaining such a domain theory, however, requires a large investment and a range of powerful validation and maintenance tools. The area of theory refinement is concerned with using training data to automatically change an existing theory so that it better fits the data. Theory refinement techniques, however, have not been extensively used in applications because of the problems in scaling up their underlying algorithms. In our work we have applied theory refinement to assist in the validation and maintenance of a requirements theory concerning separation standards in the North East Atlantic. In this paper we describe an implemented refinement algorithm, which processes a logic program automatically generated from the theory. We overcame the size and expressiveness problems typically encountered when applying theory refinement to a logic program of this kind by designing focused, composite refinement operators within the algorithm. These operators modify the auto-generated logic program by generalising or specialising clauses containing ordinal relations — that is, relations which operate on totally ordered data.
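As a purely hypothetical sketch of a focused operator that generalises a clause containing an ordinal relation (the rule representation and the separation_nm field are our inventions, not the implemented system's), one can relax a numeric threshold just enough to cover a failing positive example:

```python
def generalise_ordinal(clause, example):
    """clause maps a variable to an ordinal constraint (relation, threshold),
    e.g. {'separation_nm': ('>=', 60)}. Relax each '>=' bound just enough
    to cover `example`."""
    revised = {}
    for var, (rel, thr) in clause.items():
        if rel == '>=' and example[var] < thr:
            revised[var] = (rel, example[var])   # generalise: lower the bound
        else:
            revised[var] = (rel, thr)
    return revised

rule = {'separation_nm': ('>=', 60)}    # hypothetical separation standard
positive = {'separation_nm': 50}        # an example the theory should accept
print(generalise_ordinal(rule, positive))   # {'separation_nm': ('>=', 50)}
```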

15.
By its very nature, artificial intelligence is concerned with investigating topics that are ill-defined and ill-understood. This paper describes two approaches to expanding a good but incomplete theory of a domain. The first uses the domain theory as far as possible and fills in specific gaps in the reasoning process, generalizing the suggested missing steps and adding them to the domain theory. The second takes existing operators of the domain theory and applies perturbations to form new plausible operators for the theory. The specific domain to which these techniques have been applied is high-school algebra problems. The domain theory is represented as operators corresponding to algebraic manipulations, and the problem of expanding the domain theory becomes one of discovering new algebraic operators. The general framework used is one of generate and test—generating new operators for the domain and using tests to filter out unreasonable ones. The paper compares two algorithms, INFER* and MALGEN, examining their performance on actual data collected in two Scottish schools and concluding with a critical discussion of the two methods.

16.

This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced in order to minimize cross-validation error. A general theory is presented, then developed in detail for linear regression and instance-based learning.
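The balance the theory prescribes is easy to see empirically; here is a minimal k-fold cross-validation sketch (ours, not the paper's analysis) for polynomial models of increasing complexity:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = 2.5 * x ** 2 - 1.0 * x + 0.2 * rng.normal(size=60)   # true relation is quadratic

def cv_error(degree, k=5):
    """Mean squared error of degree-`degree` polynomial fits under k-fold CV."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

for d in (1, 2, 9):
    print(d, round(cv_error(d), 4))
# typically degree 1 underfits and degree 9 overfits: cross-validation
# error is smallest near the true, simpler model
```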

17.
18.
The algebraic language of category theory is the setting for a theory of reachability, observability and realization for a new class of systems, the decomposable systems, which generalize linear systems and group machines. Linearity is shown to play no role in the core results of Kalman's theory of linear systems. Moreover, we provide a new duality theory. The category-theoretic tools of powers, copowers and image factorization provide the foundations for this study. Even though the results are more general, the proofs are simpler than those of the classical linear theory, once the basic category theory, presented here as a self-contained exposition, has been mastered.

19.
In this paper, we present an approach to system identification based on viewing identification as a problem in statistical learning theory. Apparently, this approach was first mooted in [E. Weyer, R.C. Williamson, I. Mareels, Sample complexity of least squares identification of FIR models, in: Proceedings of the 13th World Congress of IFAC, San Francisco, CA, July 1996, pp. 239–244]. The main motivation for initiating such a program is that system identification theory has traditionally provided asymptotic results, whereas statistical learning theory is devoted to the derivation of finite-time estimates. If system identification is to be combined with robust control theory to develop a sound theory of indirect adaptive control, it is essential to have finite-time estimates of the sort provided by statistical learning theory. As an illustration of the approach, a result is derived showing that in the case of systems with fading memory, it is possible to combine standard results in statistical learning theory (suitably modified to the present situation) with some fading-memory arguments to obtain finite-time estimates of the desired kind. It is also shown that the time series generated by a large class of BIBO stable nonlinear systems has a property known as β-mixing. As a result, earlier results of [E. Weyer, Finite sample properties of system identification of ARX models under mixing conditions, Automatica, 36 (9) (2000) 1291–1299] can be applied to many more situations than shown in that paper.
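As a back-of-envelope instance of such a finite-time estimate, Hoeffding's inequality bounds the deviation between empirical and true risk for a single fixed model with i.i.d. loss in [0, B]; the paper's β-mixing results adjust such bounds for dependent data, and uniform bounds over model classes add capacity terms. A minimal sketch under those stated assumptions:

```python
import math

def sample_complexity(eps, delta, B=1.0):
    """Smallest n with P(|empirical risk - true risk| > eps) <= delta for a
    single fixed model, via Hoeffding: n >= (B^2 / (2 eps^2)) * ln(2 / delta)."""
    return math.ceil(B ** 2 / (2 * eps ** 2) * math.log(2 / delta))

print(sample_complexity(0.05, 0.01))   # 1060 samples suffice in this setting
```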

20.
A theory, in this context, is a Boolean formula; it is used to classify instances, or truth assignments. Theories can model real-world phenomena, and can do so more or less correctly. The theory revision, or concept revision, problem is to correct a given, roughly correct concept. This problem is considered here in the model of learning with equivalence and membership queries. A revision algorithm is considered efficient if the number of queries it makes is polynomial in the revision distance between the initial theory and the target theory, and polylogarithmic in the number of variables and the size of the initial theory. The revision distance is the minimal number of syntactic revision operations, such as the deletion or addition of literals, needed to obtain the target theory from the initial theory. Efficient revision algorithms are given for Horn formulas and read-once formulas, where revision operators are restricted to deletions of variables or clauses, and for parity formulas, where revision operators include both deletions and additions of variables. We also show that the query complexity of the read-once revision algorithm is near-optimal.
