Similar documents
Found 20 similar documents (search time: 12 ms)
1.
Prediction in a small-sized sample with a large number of covariates, the "small n, large p" problem, is challenging. This setting is encountered in multiple applications, such as precision medicine, where obtaining additional data can be extremely costly or even impossible, and extensive research effort has recently been dedicated to finding principled solutions for accurate prediction. However, a valuable source of additional information, domain experts, has not yet been efficiently exploited. We formulate knowledge elicitation generally as a probabilistic inference process, in which expert knowledge is sequentially queried to improve predictions. In the specific case of sparse linear regression, where we assume the expert has knowledge about the relevance of the covariates or the values of the regression coefficients, we propose an algorithm and a computational approximation for fast and efficient interaction, which sequentially identifies the most informative features on which to query expert knowledge. Evaluations of the proposed method in experiments with simulated and real users show improved prediction accuracy even with modest effort from the expert.
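A minimal sketch of the sequential-querying idea described above, not the authors' actual algorithm: maintain a relevance belief for each covariate, repeatedly query the (here simulated) expert about the covariate whose belief is most uncertain, and update. All names and the simplistic belief update are illustrative assumptions.

```python
# Toy sketch of sequential knowledge elicitation: we hold a Bernoulli
# belief p_j that covariate j is relevant, and repeatedly query a
# simulated expert about the covariate whose belief is most uncertain.

def most_uncertain(beliefs, asked):
    # Pick the unqueried covariate whose relevance belief is closest to 0.5.
    candidates = [j for j in range(len(beliefs)) if j not in asked]
    return min(candidates, key=lambda j: abs(beliefs[j] - 0.5))

def elicit(beliefs, expert, n_queries):
    asked = set()
    for _ in range(n_queries):
        j = most_uncertain(beliefs, asked)
        asked.add(j)
        # Expert feedback collapses the belief toward 0 or 1; a real
        # system would perform a proper Bayesian update instead.
        beliefs[j] = 0.95 if expert(j) else 0.05
    return beliefs

# Simulated expert: covariates 0 and 2 are truly relevant.
truly_relevant = {0, 2}
result = elicit([0.5, 0.5, 0.5, 0.5], lambda j: j in truly_relevant, 4)
print(result)  # → [0.95, 0.05, 0.95, 0.05]
```

The point is only the interaction loop: each query targets the feature whose relevance is least settled, so a few expert answers already sharpen the prior used for prediction.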

2.
This paper presents an information-based logic that is applied to the analysis of entailment, implicature and presupposition in natural language. The logic is very fine-grained and is able to make distinctions that are outside the scope of classical logic. It is independently motivated by certain properties of natural human reasoning, namely partiality, paraconsistency, relevance, and defeasibility: once these are accounted for, the data on implicature and presupposition come quite naturally. The logic is based on the family of semantic spaces known as bilattices, originally proposed by Ginsberg (1988) and used extensively by Fitting (1989, 1992). Specifically, the logic is based on a subset of bilattices that I call evidential bilattices, constructed as the Cartesian product of certain algebras with themselves. The specific details of the epistemic agent approach of the logical system are derived from the work of Belnap (1975, 1977), augmented by the use of evidential links for inferencing. An important property of the system is that it has been implemented, using an extension of Fitting's work on bilattice logic programming (1989, 1991), to build a model-based inference engine for the augmented Belnap logic. This theorem prover is very efficient for a reasonably wide range of inferences. A shorter version of this material was originally presented at the Fifth International Symposium on Logic and Language, Noszvaj, Hungary, 1994. The author is now in the Mathematical Reasoning Group, Department of Artificial Intelligence, University of Edinburgh, 80 South Bridge, Edinburgh EH1 1HN, U.K.

3.
Kozlov, A.V.; Singh, J.P. Computer, 1996, 29(12):33-40
Probabilistic inference is becoming an integral part of decision-making systems, but it is so computationally intensive that it is often impractical. The authors report on the effectiveness of speeding up this technique by exploiting its parallelism.

4.
In this paper we present a methodology for extracting information about lexical translation equivalences from the machine-readable versions of conventional dictionaries (MRDs), and describe a series of experiments on the semi-automatic construction of a linked multilingual lexical knowledge base for English, Dutch, and Spanish. We discuss the advantages and limitations of using MRDs that this has revealed, and some strategies we have developed to cover gaps where no direct translation can be found.

5.
Online learning is a key methodology for expert systems to cope gracefully with dynamic environments. In the context of neuro-fuzzy systems, research efforts have been directed toward developing online learning methods that can update both system structure and parameters on the fly. However, current online learning approaches often rely on heuristic methods that lack a formal statistical basis and exhibit limited scalability in the face of large data streams. In light of these issues, we develop a new Sequential Probabilistic Learning for Adaptive Fuzzy Inference System (SPLAFIS) that synergizes the Bayesian Adaptive Resonance Theory (BART) and the Rule-Wise Decoupled Extended Kalman Filter (RDEKF) to generate the rule base structure and refine its parameters, respectively. The marriage of the BART and RDEKF methods, both of which are built upon the maximum a posteriori (MAP) principle rooted in Bayes' rule, offers a comprehensive probabilistic treatment and an efficient means of online structural and parameter learning suitable for large, dynamic data streams. To manage model complexity without sacrificing predictive accuracy, SPLAFIS also includes a simple procedure to prune inconsequential rules that contribute little over time. The predictive accuracy, structural simplicity, and scalability of the proposed model are exemplified in empirical studies using chaotic time series, stock index, and large nonlinear regression datasets.

6.
A recent and effective approach to probabilistic inference calls for reducing the problem to one of weighted model counting (WMC) on a propositional knowledge base. Specifically, the approach encodes the probabilistic model, typically a Bayesian network, as a propositional knowledge base in conjunctive normal form (CNF), with a weight associated to each model according to the network parameters. Given this CNF, computing the probability of some evidence becomes a matter of summing the weights of all CNF models consistent with the evidence. A number of variations on this approach have appeared in the literature recently, varying across three orthogonal dimensions. The first dimension concerns the specific encoding used to convert a Bayesian network into a CNF. The second relates to whether weighted model counting is performed using a search algorithm on the CNF, or by compiling the CNF into a structure that renders WMC a polytime operation in the size of the compiled structure. The third deals with the specific properties of network parameters (local structure) captured in the CNF encoding. In this paper, we discuss recent work in this area across these three dimensions and demonstrate empirically its practical importance in significantly expanding the reach of exact probabilistic inference. We restrict our discussion to exact inference and model counting, even though other proposals have been extended to approximate inference and approximate model counting.
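The WMC reduction sketched above can be illustrated by brute-force enumeration on a tiny example: each literal carries a weight, a model's weight is the product of its literals' weights, and WMC sums the weights of models satisfying the CNF. Real systems use search or compilation rather than enumeration; the one-variable "network" below is an invented illustration.

```python
from itertools import product

def wmc(cnf, weights, n_vars):
    # cnf: list of clauses, each a list of (variable, polarity) literals.
    # weights: maps (variable, truth value) -> literal weight.
    total = 0.0
    for assignment in product([False, True], repeat=n_vars):
        # Keep only assignments that satisfy every clause.
        if all(any(assignment[v] == pol for v, pol in clause) for clause in cnf):
            w = 1.0
            for v in range(n_vars):
                w *= weights[(v, assignment[v])]
            total += w
    return total

# Encode a single-variable network: P(A = true) = 0.3.
weights = {(0, True): 0.3, (0, False): 0.7}

print(wmc([], weights, 1))            # → 1.0  (no constraints: 0.3 + 0.7)
print(wmc([[(0, True)]], weights, 1)) # → 0.3  (evidence A=true as a unit clause)
```

This mirrors the reduction described in the abstract: conditioning on evidence is just adding unit clauses, and the probability of the evidence is the resulting weighted count.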

7.
Recommendation is an important application on the Web. In this paper, we propose a method for recommending items to a user by extending a probabilistic inference model from information retrieval. We regard the user's preference as the query, an item as a document, and explicit and implicit factors as index terms. Additional information sources can be added to the probabilistic inference model, particularly belief networks. The proposed method also uses the belief network model to recommend items by combining expert information. Experimental results on real-world data sets show that the proposed method can improve recommendation effectiveness.

8.
Conclusion: this is an interesting and informative book with much to recommend it. It covers a great deal of ground in its discussion of ideas and presentation of an actual implementation, but I believe the major contribution lies in four areas:
- In presenting a system whose syntax is based on principles and parameters, Dorr provides an interesting challenge to the standard rule-based approaches, which are broadly unification-based.
- Dorr presents an interlingua which appears to have relatively solid linguistic motivation, and for which there is a very systematic mapping to and from text. This directly addresses two of the standard objections to interlingual approaches: arbitrariness and lack of systematicity. Unfortunately, the range of phenomena she considers is too limited to address the other major objection normally raised against interlingual approaches: lack of coverage.
- Dorr presents a classification of translation divergences. I believe such a classification to be worthwhile, and I take it as a useful beginning. However, I find the actual classification proposed too broad and theory-dependent. Moreover, Dorr's claims about the completeness of the classification are not convincing.
- Dorr presents a solution to various translation divergences via parameterization of the interlingual representation. Here I believe reservations about the conceptual coherence of the representation and the generality of the approach are appropriate.
I am grateful to Harold Somers and Bonnie Dorr for criticisms and corrections of an earlier version. Of course, the remaining deficiencies are entirely my fault.

9.
The probabilistic reasoning scheme of Dempster-Shafer theory provides a remarkably efficient bug identification algorithm for a hierarchical Buggy model. In the particular Buggy model generated by the repair theory of Brown & Van Lehn (1980, A generative theory of bugs in procedural skills, Cognitive Science, 2, 155-192), the schemes of both Shafer & Logan (1987, Implementing Dempster's rule for hierarchical evidence, Artificial Intelligence, 33, 271-298) and Gordon & Shortliffe (1985, A method for managing evidential reasoning in a hierarchical hypothesis space, Artificial Intelligence, 26, 324-357) provide almost identical computational accuracy, although the latter involves an approximation of a "smallest superset". If n denotes the number of bugs to be identified, the computational complexity of the two schemes, originally O(n^(4/3)) and O(n^2) respectively, can be improved to O(n) using a simplified top-down calculation scheme whereby, from among all the nodes, we first locate the particular "parental" node to which the bug belongs and then the bug itself among the siblings within that node. On average, about 5-7 problems are adequate to raise the belief function of the bug to the 95% level, based on the evidential reasoning schemes.
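For readers unfamiliar with the evidential machinery the abstract relies on, here is a generic textbook implementation of Dempster's rule of combination on mass functions, not the hierarchical scheme of Shafer & Logan; the bug names and mass values are invented for illustration.

```python
# Dempster's rule: combine two mass functions over a frame of discernment.
# Focal elements are frozensets of hypotheses (here, candidate bugs).

def combine(m1, m2):
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                # Mass assigned to the empty set is conflict.
                conflict += wa * wb
    # Normalize by the non-conflicting mass (Dempster's normalization step).
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

frame = frozenset({"bug1", "bug2"})
m1 = {frozenset({"bug1"}): 0.6, frame: 0.4}  # one problem points to bug1
m2 = {frozenset({"bug1"}): 0.5, frame: 0.5}  # a second problem agrees
m = combine(m1, m2)
print(m[frozenset({"bug1"})])  # belief in bug1 grows toward 0.8
```

Repeating this combination as each diagnostic problem is answered is what drives the belief in a single bug toward the 95% level mentioned above.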

10.
We identify various situations in probabilistic intelligent systems in which conditionals (rules) as mathematical entities, as well as their conditional logic operations, are needed. In discussing the Bayesian updating procedure and belief function construction, we provide a new method for modeling if... then rules as Boolean elements that is nonetheless compatible with conditional probability quantifications. © 1994 John Wiley & Sons, Inc.

11.
Innovations in Systems and Software Engineering - We present AQUA, a new probabilistic inference algorithm that operates on probabilistic programs with continuous posterior distributions. AQUA...

12.
Thesauri and controlled vocabularies facilitate access to digital collections by explicitly representing the underlying principles of organization. Translation of such resources into multiple languages is an important component for providing multilingual access. However, the specificity of vocabulary terms in most thesauri precludes fully-automatic translation using general-domain lexical resources. In this paper, we present an efficient process for leveraging human translations to construct domain-specific lexical resources. This process is illustrated on a thesaurus of 56,000 concepts used to catalog a large archive of oral histories. We elicited human translations on a small subset of concepts, induced a probabilistic phrase dictionary from these translations, and used the resulting resource to automatically translate the rest of the thesaurus. Two separate evaluations demonstrate the acceptability of the automatic translations and the cost-effectiveness of our approach.
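The dictionary-induction step described above can be sketched as a relative-frequency estimate of P(target | source) over aligned phrase pairs. The paper's actual induction is more sophisticated, and the phrase pairs below are invented examples, not data from the oral-history archive.

```python
from collections import Counter, defaultdict

def induce_dictionary(aligned_pairs):
    # Count how often each source phrase is rendered as each target phrase,
    # then normalize per source phrase to get translation probabilities.
    counts = defaultdict(Counter)
    for src, tgt in aligned_pairs:
        counts[src][tgt] += 1
    return {src: {tgt: n / sum(c.values()) for tgt, n in c.items()}
            for src, c in counts.items()}

# Hypothetical human translations of a few thesaurus concepts.
pairs = [("oral history", "muendliche Geschichte"),
         ("oral history", "muendliche Geschichte"),
         ("oral history", "Oral History"),
         ("archive", "Archiv")]

d = induce_dictionary(pairs)
print(d["oral history"]["muendliche Geschichte"])  # → 0.6666666666666666
```

Once induced from the small elicited subset, such a probabilistic dictionary can be applied to the remaining concepts, which is the leverage the abstract describes.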

13.
This paper describes the lexical-semantic basis for UNITRAN, an implemented scheme for translating bidirectionally among Spanish, English, and German. Two claims made here are that the current representation handles many distinctions (or divergences) across languages without recourse to language-specific rules, and that the lexical-semantic framework provides the basis for a systematic mapping between the interlingua and the syntactic structure. The representation adopted is an extended version of lexical conceptual structure, which is suited to the task of translating between divergent structures for two reasons: (1) it provides an abstraction of language-independent properties from structural idiosyncrasies; and (2) it is compositional in nature. The lexical-semantic approach addresses the divergence problem by using a linguistically grounded mapping that has access to parameter settings in the lexicon. We examine a number of relevant issues, including the problem of defining primitives, the issue of interlinguality, the cross-linguistic coverage of the system, and the mapping between the syntactic structure and the interlingua. A detailed example of lexical-semantic composition is presented.

14.
EX- and BC-type identification, one-sided error probabilistic inference, and reliable frequency identification on sets of functions are introduced. In particular, we relate one to the other and characterize one-sided error probabilistic inference as exactly coinciding with reliable frequency identification on any set M. Moreover, we show that reliable EX- and BC-frequency inference forms a new discrete hierarchy with breakpoints 1, 1/2, 1/3, ....

15.
In this paper, we obtain some new inequalities by means of the mean inequalities of random variables, which include generalizations of the Greub–Rheinboldt inequality.

16.
Bayesian networks (BN) are a powerful tool for various data-mining systems. The available methods of probabilistic inference from learning data have shortcomings such as high computational complexity and cumulative error, due to a partial loss of information in the transition from empirical information to conditional probability tables. The paper presents a new simple and exact algorithm for probabilistic inference in BN from learning data. Translated from Kibernetika i Sistemnyi Analiz, No. 3, pp. 93-99, May-June 2007.

17.
18.
Automated mosaics via topology inference (total citations: 10; self-citations: 0; citations by others: 0)
The authors present a complete approach for the automated construction of mosaics from images and video using topology inference, local and global alignment, and compositing.

19.
Many important real-world applications of machine learning, statistical physics, constraint programming and information theory can be formulated using graphical models that involve determinism and cycles. Accurate and efficient inference and training of such graphical models remains a key challenge. Markov logic networks (MLNs) have recently emerged as a popular framework for expressing a number of problems which exhibit these properties. While loopy belief propagation (LBP) can be an effective solution in some cases, when both determinism and cycles are present LBP frequently fails to converge or converges to inaccurate results. As such, sampling-based algorithms have been found to be more effective and are more popular for general inference tasks in MLNs. In this paper, we introduce Generalized arc-consistency Expectation Maximization Message-Passing (GEM-MP), a novel message-passing approach to inference in an extended factor graph that combines constraint programming techniques with variational methods. We focus our experiments on Markov logic and Ising models, but the method is applicable to graphical models in general. In contrast to LBP, GEM-MP formulates the message-passing structure as steps of variational expectation maximization. Moreover, the algorithm leverages local structures in the factor graph by using generalized arc consistency when performing a variational mean-field approximation, so each such update increases a lower bound on the model evidence. Our experiments on Ising grids, entity resolution and link prediction problems demonstrate the accuracy and convergence of GEM-MP over existing state-of-the-art inference algorithms such as MC-SAT, LBP, and Gibbs sampling, as well as convergent message-passing algorithms such as the concave-convex procedure, residual BP, and the L2-convex method.

20.