Similar Literature
20 similar documents found.
1.
Stochastic volatility (SV) models have been considered a real alternative to the time-varying volatility models of the ARCH family. Existing asymmetric SV (ASV) models treat volatility asymmetry via the leverage effect hypothesis. Generalised ASV models are introduced that account for both volatility asymmetry and violations of normality, expressed simultaneously by skewness and excess kurtosis. The new generalised ASV models are estimated using the Bayesian Markov chain Monte Carlo approach for parameter and log-volatility estimation. Using simulated and real financial data series, the new models are compared to existing SV models for their statistical properties and for their estimation performance in within-sample and out-of-sample periods. Results show that there is much to gain from the introduction of the generalised ASV models.
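A minimal sketch of the kind of leverage-effect data-generating process these ASV models build on, assuming the standard first-order log-volatility recursion; all parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# Minimal sketch: simulate returns from an asymmetric SV model with
# leverage, i.e. correlated return and log-volatility innovations.
# Parameter values are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
T, mu, phi, sigma_eta, rho = 2000, -0.5, 0.97, 0.15, -0.4

h = np.empty(T)                      # log-volatility
y = np.empty(T)                      # returns
h[0] = mu
for t in range(T - 1):
    # draw correlated innovations (eps drives returns, eta drives h)
    eps, z = rng.standard_normal(2)
    eta = rho * eps + np.sqrt(1 - rho**2) * z
    y[t] = np.exp(h[t] / 2) * eps
    h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * eta
y[-1] = np.exp(h[-1] / 2) * rng.standard_normal()
```

A negative rho produces the leverage effect: negative returns tend to be followed by higher volatility.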

2.
In a generalised linear model the mean of each observation is assumed to be related to its linear predictor by a known link function. In some cases this link function may not be known exactly, but can be assumed to be of some general 'parametric' form. We show how generalised linear models can be extended to fit models with such link functions. Examples given include the two-parameter logistic link and estimating the exponent of the Box-Cox link. GLIM-3 code is supplied.
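As an illustration of fitting a parametric link, a hedged sketch that profiles the exponent of a power (Box-Cox-type) link for Gaussian errors over a grid; the data, grid, and helper names are invented for the example:

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch: estimate the exponent lam of a power (Box-Cox-type) link for a
# Gaussian-error GLM, mu = (X @ beta) ** (1 / lam), by profiling the
# residual sum of squares over a grid of candidate exponents.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(1, 3, n)])
true_beta, true_lam = np.array([0.5, 1.0]), 0.5
y = (X @ true_beta) ** (1 / true_lam) + 0.05 * rng.standard_normal(n)

def profile_rss(lam):
    resid = lambda b: y - np.clip(X @ b, 1e-9, None) ** (1 / lam)
    fit = least_squares(resid, x0=np.ones(X.shape[1]))
    return fit.cost

grid = np.linspace(0.2, 2.0, 19)
best_lam = min(grid, key=profile_rss)
print("profiled link exponent:", best_lam)
```

In practice the profiled exponent and the regression coefficients would be reported together, with the profile curve used to judge how well the data identify the link.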

3.
This article presents a new approximation method for computing arguments or explanations in the context of logic-based argumentative or abductive reasoning. The algorithm can be interrupted at any time, returning the solution found so far. The quality of the approximation increases monotonically when more computational resources are available. The method is based on cost functions and returns lower and upper bounds. Research supported by (1) the Alexander von Humboldt Foundation, (2) the German Federal Ministry of Education and Research, and (3) the German Program for Investment in the Future.
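A hedged sketch of the anytime behaviour described here; the hypothesis set, nonnegative costs, and `explains` predicate are illustrative stand-ins, not the paper's algorithm:

```python
import heapq

# Hedged sketch of an anytime, cost-based search for explanations:
# best-first expansion over sets of weighted assumptions.  Stopping
# after any number of expansions yields the cheapest explanation found
# so far (an upper bound on the optimal cost) and the cheapest open
# node (a lower bound, valid because weights are nonnegative).
# `hypotheses` is a list of (name, weight) pairs; `explains` is a
# user-supplied predicate on sets of hypothesis names.
def anytime_explain(hypotheses, explains, budget):
    frontier = [(0.0, frozenset(), tuple(sorted(hypotheses)))]
    best_cost, best_set = float("inf"), None
    for _ in range(budget):
        if not frontier:
            break
        cost, chosen, rest = heapq.heappop(frontier)
        if explains(chosen):
            if cost < best_cost:
                best_cost, best_set = cost, chosen
            continue                      # a solution need not be grown
        for i, (h, w) in enumerate(rest):
            heapq.heappush(frontier, (cost + w, chosen | {h}, rest[i + 1:]))
    lower = min(frontier[0][0], best_cost) if frontier else best_cost
    return lower, best_cost, best_set
```

Raising `budget` can only tighten the two bounds, which mirrors the monotone-improvement property claimed in the abstract.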

4.
We study the relations between the expressive power of non-monotonic formalisms and polynomial-time computability in the framework of stable model semantics. While the problem of deciding whether a logic program has a total stable model is NP-complete, we introduce a polynomial-time algorithm that generates such a model for several important classes of programs, which are discussed in this paper. In the general case, the algorithm generates a (not necessarily total) p-stable model of the input program. This work was partially written while the first author was visiting the UCLA Computer Science Department. The first author has been supported in part by the Consiglio Nazionale delle Ricerche of Italy.
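The stable-model test underlying this semantics is compact enough to sketch directly: a set M of atoms is a stable model iff it equals the least model of the Gelfond-Lifschitz reduct of the program with respect to M. A minimal illustration (string atoms, rules as triples):

```python
# Rules are (head, positive_body, negative_body) triples of string atoms.
def least_model(definite_rules):
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, M):
    # reduct: drop rules whose negative body intersects M,
    # then delete the remaining negative literals
    reduct = [(h, pos) for h, pos, neg in program if not (neg & M)]
    return least_model(reduct) == M

prog = [("p", set(), {"q"}), ("q", set(), {"p"})]   # p :- not q.  q :- not p.
print(is_stable(prog, {"p"}), is_stable(prog, {"p", "q"}))  # True False
```

The polynomial-time generation result above is about constructing such an M directly for restricted program classes, rather than guessing and checking as this test does.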

5.
In stochastic volatility (SV) models, asset returns conditional on the latent volatility are usually assumed to have a normal, Student-t or exponential power (EP) distribution. An earlier study uses a generalised t (GT) distribution for the conditional returns, and the results indicate that the GT distribution provides a better model fit to the Australian Dollar/Japanese Yen daily exchange rate than the Student-t distribution. In fact, the GT family nests a number of well-known distributions, including the commonly used normal, Student-t and EP distributions. This paper extends the SV model with a GT distribution by incorporating general volatility asymmetry. We compare the empirical performance of nested distributions of the GT distribution as well as different volatility asymmetry specifications. The new asymmetric GT SV models are estimated using the Bayesian Markov chain Monte Carlo (MCMC) method to obtain parameter and log-volatility estimates. Using daily returns from the Standard and Poor's (S&P) 500 index, we investigate the effects of the specification of error distributions as well as volatility asymmetry on parameter and volatility estimates. Results show that the choice of error distribution has a major influence on volatility estimation only when volatility asymmetry is not accounted for.
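For reference, a sketch of the GT log-density in the McDonald-Newey parameterisation, which nests the normal (p=2, q large), Student-t (p=2) and EP (q large) cases; this is one common parameterisation, and the paper's may differ in scaling:

```python
import numpy as np
from scipy.special import betaln

# Sketch of the generalised t (GT) log-density, McDonald-Newey style.
# Nests normal (p=2, q->inf), Student-t (p=2), exponential power
# (q->inf).  Scaling conventions vary between papers.
def gt_logpdf(y, sigma=1.0, p=2.0, q=5.0):
    z = np.abs(y / sigma) ** p / q
    return (np.log(p) - np.log(2 * sigma) - np.log(q) / p
            - betaln(1 / p, q) - (q + 1 / p) * np.log1p(z))
```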

6.
A default strategy for fully Bayesian model determination for generalised linear mixed models (GLMMs) is considered that addresses the two key issues of default prior specification and computation. In particular, the concept of unit-information priors is extended to the parameters of a GLMM. A combination of Markov chain Monte Carlo (MCMC) and Laplace approximations is used to compute approximations to the posterior model probabilities and find a subset of models with high posterior model probability. Bridge sampling is then used on the models in this subset to approximate the posterior model probabilities more accurately. The strategy is applied to four examples.
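A hedged sketch of the Laplace building block used in such a strategy: approximate the log marginal likelihood from the posterior mode and the curvature there. `log_joint` (the unnormalised log posterior) is a user-supplied stand-in, and the BFGS inverse-Hessian is a crude curvature estimate:

```python
import numpy as np
from scipy.optimize import minimize

# Laplace approximation to a log marginal likelihood:
# log m(y) ~= log p(y, theta_hat) + (d/2) log(2*pi) - 0.5 * log|H|,
# where theta_hat maximises the unnormalised log posterior and H is
# the negative Hessian there (estimated via BFGS).
def laplace_log_evidence(log_joint, theta0):
    opt = minimize(lambda th: -log_joint(th), theta0, method="BFGS")
    d = len(np.atleast_1d(opt.x))
    H = np.linalg.inv(opt.hess_inv)          # negative Hessian at the mode
    sign, logdet = np.linalg.slogdet(H)
    return -opt.fun + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

print(laplace_log_evidence(lambda th: -0.5 * np.dot(th, th), np.ones(2)))
# ~= np.log(2 * np.pi): exact here, since the integrand is Gaussian
```

In the strategy above, this cheap approximation screens the model space; bridge sampling then refines the probabilities of the shortlisted models.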

7.
M.C., Su, E. Automatica, 2004, 40(12): 2029–2041
In any real-life identification problem, only a finite number of data points is available. On the other hand, almost all results in stochastic identification pertain to asymptotic properties; that is, they tell us what happens when the number of data points tends to infinity. In this paper we consider the problem of assessing the quality of estimates identified from a finite number of data points. We focus on least squares identification of generalised FIR models and develop a method to produce a bound on the uncertainty in the parameter estimate. The method is data driven and based on tests involving permuted data sets. Moreover, it does not require that the true system be in the model class.
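A simplified stand-in for the data-driven idea (not the paper's exact construction): fit a FIR model by least squares, then compare a residual statistic against its distribution under random permutations:

```python
import numpy as np

# Fit a FIR model by least squares, then compare a residual-correlation
# statistic against the same statistic under random permutations of the
# residuals.  An unexceptional statistic gives no evidence against the
# estimate.  This is a simplified illustration, not the paper's method.
rng = np.random.default_rng(2)
N, n_taps = 400, 4
u = rng.standard_normal(N)
g_true = np.array([0.8, 0.4, -0.2, 0.1])
y = np.convolve(u, g_true)[:N] + 0.1 * rng.standard_normal(N)

# regressor matrix of lagged inputs
Phi = np.column_stack([np.concatenate([np.zeros(k), u[:N - k]])
                       for k in range(n_taps)])
g_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
r = y - Phi @ g_hat

stat = lambda e: abs(np.dot(e[:-1], e[1:]))     # lag-1 residual correlation
perms = [stat(rng.permutation(r)) for _ in range(999)]
p_value = (1 + sum(p >= stat(r) for p in perms)) / 1000
print(g_hat.round(2), "permutation p-value:", p_value)
```

Like the method in the paper, nothing here relies on the true system belonging to the FIR model class; the permutation distribution is built entirely from the observed data.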

8.
This paper illustrates extensively the theoretical properties, the implementation issues, and the programming style underlying finitary programs. They are a class of normal logic programs whose consequences under the stable model semantics can be effectively computed, despite the fact that finitary programs admit function symbols (hence infinite domains) and recursion. From a theoretical point of view, finitary programs are interesting because they enjoy properties that are extremely unusual for a nonmonotonic formalism, such as compactness. From the application point of view, the theory of finitary programs shows how the existing technology for answer set programming can be extended from problem solving below the second level of the polynomial hierarchy to all semidecidable problems. Moreover, finitary programs allow a more natural encoding of recursive data structures and may increase the performance of credulous reasoners.

9.
Different ways of representing probabilistic relationships among the attributes of a domain are examined, and it is shown that the nature of the domain relationships used in a representation affects the types of reasoning objectives that can be achieved. Two well-known formalisms for representing the probabilistic relationships among attributes of a domain are considered: the dependence tree formalism presented by C.K. Chow and C.N. Liu (1968) and the Bayesian networks methodology presented by J. Pearl (1986). An example is used to illustrate the nature of the relationships and the difference in the types of reasoning performed by these two representations. An abductive type of reasoning objective that requires use of the known qualitative relationships of the domain is demonstrated. A suitable way to represent such qualitative relationships along with the probabilistic knowledge is given, and how an explanation for a set of observed events may be constructed is discussed. An algorithm for learning the qualitative relationships from empirical data, based on the minimization of conditional entropy, is presented.
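The dependence-tree construction of Chow and Liu is simple enough to sketch: estimate pairwise mutual information from data and take a maximum-weight spanning tree. A minimal version for binary data, with a synthetic demo at the end:

```python
import numpy as np
from itertools import combinations
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Chow-Liu sketch: pairwise mutual information, then a maximum-weight
# spanning tree (via a minimum spanning tree on negated weights).
# Assumes 0/1 data; the demo data below are synthetic.
def mutual_info(x, y):
    mi, n = 0.0, len(x)
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_edges(data):                      # data: (n_samples, n_vars)
    d = data.shape[1]
    W = np.zeros((d, d))
    for i, j in combinations(range(d), 2):
        W[i, j] = mutual_info(data[:, i], data[:, j])
    tree = minimum_spanning_tree(csr_matrix(-W))
    return list(zip(*tree.nonzero()))          # edges of the dependence tree

rng = np.random.default_rng(5)
a = rng.integers(0, 2, 500)
b = (a ^ (rng.random(500) < 0.1)).astype(int)  # b is a noisy copy of a
c = rng.integers(0, 2, 500)
print(chow_liu_edges(np.column_stack([a, b, c])))
# the strongest edge joins columns 0 and 1
```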

10.
Though the semantics of nonmonotonic logic programming has been studied extensively, relatively little work has been done on the operational aspects of these semantics. In this paper, we develop techniques to compute the well-founded model of a logic program. We describe a prototype implementation and show, based on experimental results, that our technique is more efficient than the standard alternating fixpoint computation. Subsequently, we develop techniques to compute the set of all stable models of a deductive database. These techniques first compute the well-founded semantics and then use an intelligent branch-and-bound strategy to compute the stable models. We report on our implementation, as well as on experiments that we have conducted on the efficiency of our approach.
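The baseline the paper improves on, the alternating fixpoint, can be sketched compactly: with Gamma(S) the least model of the Gelfond-Lifschitz reduct w.r.t. S, the atoms in the least fixpoint of Gamma(Gamma(.)) are true, atoms outside Gamma of that fixpoint are false, and the rest are undefined. A minimal illustration:

```python
# Rules are (head, positive_body, negative_body) triples of string atoms.
def gamma(program, S):
    rules = [(h, pos) for h, pos, neg in program if not (neg & S)]
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in rules:
            if pos <= model and h not in model:
                model.add(h)
                changed = True
    return model

def well_founded(program):
    true_set = set()
    while True:                                  # iterate Gamma twice
        nxt = gamma(program, gamma(program, true_set))
        if nxt == true_set:
            break
        true_set = nxt
    possibly_true = gamma(program, true_set)
    atoms = {h for h, _, _ in program}
    return true_set, atoms - possibly_true       # (true atoms, false atoms)

prog = [("a", set(), {"b"}), ("b", set(), {"a"}), ("c", set(), set())]
print(well_founded(prog))   # c is true; a and b remain undefined
```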

11.
Real-world problems often require purely deductive reasoning to be supported by other techniques that can cope with noise in the form of incomplete and uncertain data. Abductive inference tackles incompleteness by guessing unknown information, provided that it is compliant with given constraints. Probabilistic reasoning tackles uncertainty by weakening the sharp logical approach. This work aims at bringing the two together and at further extending the expressive power of the resulting framework, called Probabilistic Expressive Abductive Logic Programming (PEALP). It adopts a Logic Programming perspective, introducing several kinds of constraints and allowing a degree of strength to be set on their validity. Procedures to handle both extensions, compatible with standard abductive and probabilistic frameworks, are also provided.

12.
This paper presents several stable adaptive algorithms for the control of hybrid and discrete systems in which the control parameters are adjusted at rates slower than those at which the systems operate. Continuous algorithms of an integral type, recently suggested in the literature [5], are also shown to belong to this class. From a practical standpoint, the infrequent adjustment of the control parameters makes for more robust adaptive control, while from a theoretical point of view the algorithms are attractive because they provide a unified framework for the design of continuous, hybrid, and discrete adaptive systems. Simulation results are included to indicate the type of responses that can be expected using the different algorithms.
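A toy illustration of the infrequent-adjustment idea, assuming a first-order plant and a certainty-equivalence controller whose parameter estimates are refreshed only every M samples; the plant, gains, and update law are invented for the example, not taken from the paper:

```python
import numpy as np

# Plant y(k+1) = a*y(k) + b*u(k) is identified at every sample with a
# normalised gradient law, but the controller only receives updated
# estimates every M samples (infrequent adjustment).
rng = np.random.default_rng(3)
a, b, a_m = 0.9, 0.5, 0.6          # plant parameters and reference pole
M, T = 25, 500                     # adjustment period, horizon
theta = np.array([0.0, 1.0])       # running estimate of (a, b)
theta_ctrl = theta.copy()          # estimate the controller actually uses
y, r = 0.0, 1.0
for k in range(T):
    # place the closed-loop pole at a_m using the frozen estimates
    u = (a_m * y - theta_ctrl[0] * y + (1 - a_m) * r) / theta_ctrl[1]
    y_next = a * y + b * u + 0.01 * rng.standard_normal()
    phi = np.array([y, u])
    err = y_next - theta @ phi
    theta = theta + 0.5 * err * phi / (1.0 + phi @ phi)
    theta[1] = max(theta[1], 0.1)  # projection: keep gain estimate off zero
    if (k + 1) % M == 0:           # infrequent controller update
        theta_ctrl = theta.copy()
    y = y_next
print("final estimate of (a, b):", theta.round(2))
```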

13.
Results are presented for a class of models that are structurally stable in periodic motion; in particular, the notion of structural stability is introduced. The feasibility of such structurally stable models is proved. For a model featuring space-time symmetry, sufficient conditions for structural stability are established, and examples are presented.

14.
Several artificial intelligence architectures and systems based on “deep” models of a domain have been proposed, in particular for the diagnostic task. These systems have several advantages over traditional knowledge-based systems, but their main limitation is computational complexity. One way to face this problem is to rely on a knowledge compilation phase, which produces knowledge that can be used more effectively than the original. We show how a specific knowledge compilation approach can focus reasoning in abductive diagnosis and, in particular, can improve the performance of AID, an abductive diagnosis system. The approach aims at focusing the overall diagnostic cycle in two interdependent ways: avoiding the generation of candidate solutions that would be discarded a posteriori, and integrating the generation of candidate solutions with discrimination among different candidates. Knowledge compilation is used off-line to produce operational (i.e., easily evaluated) conditions that embed the abductive reasoning strategy and are used in addition to the original model, with the goal of ruling out parts of the search space or focusing on parts of it. The conditions make it possible to solve most cases in less time while computing the same solutions, preserving all the power of the model-based system for dealing with multiple faults and explaining the solutions. Experimental results showing the advantages of the approach are presented.

15.
The well-founded semantics and the stable model semantics capture the intuitions of skeptical and credulous nonmonotonic reasoning, respectively. They represent the two dominant proposals for the declarative semantics of deductive databases and logic programs. However, neither semantics seems to be suitable for all applications. We have developed an efficient implementation of goal-oriented query evaluation under the well-founded semantics. It produces a residual program for subgoals that are relevant to a query, which contains facts for true instances and clauses with body literals for undefined instances. We present a simple method of stable model computation that can be applied to the residual program of a query to derive answers with respect to stable models. The method incorporates both forward and backward chaining to propagate the assumed truth values of ground atoms, and derives multiple stable models through backtracking. Users are able to request that only stable models satisfying certain conditions be computed. A prototype has been developed that provides integrated query evaluation under the well-founded semantics, the stable models, and ordinary Prolog execution. We describe the user interface of the prototype and present some experimental results.

16.
Recent advances in educational technologies and the widespread use of computers in schools have fueled innovations in test construction and analysis. As the measurement accuracy of a test depends on the quality of the items it includes, item selection procedures play a central role in this process. Mathematical programming and item response theory (IRT) are often used in automating this task. However, when the item bank is very large, the number of item combinations increases exponentially and item selection becomes more tedious. To alleviate the computational complexity, researchers have previously applied heuristic search and machine learning approaches, including neural networks, to similar problems. This paper proposes a novel approach that uses abductive network modeling to automatically identify the most informative subset of test items that can be used to effectively assess examinees without seriously degrading accuracy. Abductive machine learning automatically selects only effective model inputs and builds an optimal network model of polynomial functional nodes that minimizes a predicted squared error criterion. Using a training dataset of 1500 cases (examinees) and 45 test items, the proposed approach automatically selected only 12 items, which classified an evaluation population of 500 cases with 91% accuracy. Performance is examined for various levels of model complexity and compared with that of statistical IRT-based techniques. Results indicate that the proposed approach significantly reduces the number of test items required while maintaining acceptable test quality.
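Abductive (GMDH-style) polynomial networks are not available in standard libraries, so the sketch below substitutes greedy sequential feature selection with a logistic model to convey the item-subset idea on synthetic data; it is an analogy, not the paper's method:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

# Stand-in for abductive-network input selection: greedily keep a small
# informative subset of test items and classify with it.  Synthetic data:
# only the first 12 of 45 items actually carry signal.
rng = np.random.default_rng(4)
n_examinees, n_items = 1500, 45
items = rng.integers(0, 2, size=(n_examinees, n_items))   # right/wrong
ability = items[:, :12].sum(axis=1)
passed = (ability + rng.normal(0, 1, n_examinees) > 6).astype(int)

clf = LogisticRegression(max_iter=1000)
selector = SequentialFeatureSelector(clf, n_features_to_select=12, cv=3)
selector.fit(items, passed)
subset = np.flatnonzero(selector.get_support())
acc = clf.fit(items[:, subset], passed).score(items[:, subset], passed)
print("selected items:", subset, "training accuracy: %.2f" % acc)
```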

17.
18.
Although there are many arguments that logic is an appropriate tool for artificial intelligence, there has been a perceived problem with the monotonicity of classical logic. This paper elaborates on the idea that reasoning should be viewed as theory formation where logic tells us the consequences of our assumptions. The two activities of predicting what is expected to be true and explaining observations are considered in a simple theory formation framework. Properties of each activity are discussed, along with a number of proposals as to what should be predicted or accepted as reasonable explanations. An architecture is proposed to combine explanation and prediction into one coherent framework. Algorithms used to implement the system as well as examples from a running implementation are given.

19.
OBJECTIVE: To evaluate the consistency of diagnostic data extracted from narrative electronic patient record (EPR) notes compared with data from a patient administrative system (PAS), and to assess the potential benefit of using EPR notes as a source of diagnosis data and as a basis for case identification. DESIGN: Construction of a computer algorithm to extract ICD-9 codes from narrative EPR notes; assessment of the consistency and reliability of the diagnostic codes retrieved from EPR notes and PAS; estimation of the efficiency of case identification based on data from PAS and EPR. RESULTS: Diagnosis codes were retrieved from PAS with a sensitivity of 0.989 and a positive predictive value (PPV) of 0.993. Codes were retrieved from EPR with a sensitivity of 0.908 and a PPV of 0.990. Combining the two sources increased sensitivity to 0.999. CONCLUSION: Discharge diagnoses were easily extracted from narrative EPR notes by automatic methods. Information extracted from record notes was not significantly different from the corresponding data in PAS, but EPR was incomplete compared with PAS. Utilizing data extracted from EPR improved case identification significantly.
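A minimal sketch of rule-based ICD-9 extraction in the spirit of the algorithm described; the paper's own rules are not reproduced, and the pattern below is a plausible stand-in:

```python
import re

# Match numeric ICD-9 codes such as 250.01, plus V and E codes.
ICD9 = re.compile(r"\b(?:[0-9]{3}(?:\.[0-9]{1,2})?"
                  r"|V[0-9]{2}(?:\.[0-9]{1,2})?"
                  r"|E[0-9]{3}(?:\.[0-9])?)\b")

note = "Discharge diagnoses: 250.01 diabetes mellitus, 401.9 hypertension."
print(ICD9.findall(note))    # ['250.01', '401.9']
```

A production extractor would also need context rules (e.g. restricting matches to the discharge-diagnosis section) to reach the sensitivity and PPV reported above.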

20.
In subspace identification methods, the system matrices are usually estimated by least squares, based on estimated Kalman filter state sequences and the observed inputs and outputs. For a finite number of data points, the estimated system matrix is not guaranteed to be stable, even when the true linear system is known to be stable. In this paper, stability is imposed by using regularization. The regularization term used here is the trace of a matrix that involves the dynamical system matrix and a positive (semi)definite weighting matrix. The amount of regularization can be determined from a generalized eigenvalue problem. The data augmentation method of Chui and Maciejowski (1996) is obtained by using specific choices for the weighting matrix in the regularization term.
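A simplified sketch of the stabilisation idea, assuming an identity weighting matrix so the trace penalty reduces to ridge regularisation; the paper determines the amount of regularization from a generalized eigenvalue problem, whereas the doubling search here is a simple stand-in:

```python
import numpy as np

# Least squares with a trace penalty (W = I):
#   A = argmin ||X_next - A X_now||_F^2 + c * tr(A A^T)
#     = X_next X_now^T (X_now X_now^T + c I)^{-1}
# Increase c until the spectral radius of A drops below 1.
def stabilised_A(X_now, X_next, c0=1e-6, growth=10.0):
    P, Q = X_now @ X_now.T, X_next @ X_now.T
    c = 0.0
    while True:
        A = Q @ np.linalg.inv(P + c * np.eye(P.shape[0]))
        if max(abs(np.linalg.eigvals(A))) < 1.0:
            return A, c
        c = c0 if c == 0.0 else c * growth
```

Because A shrinks toward zero as c grows, the loop always terminates; the price of guaranteed stability is a bias toward a more contractive model, which is why choosing the smallest sufficient c matters.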
