Similar Literature
17 similar documents found.
1.
This paper studies a heavy-tailed stochastic volatility (SV) model with leverage effect, where a bivariate Student-t distribution is used to model the error innovations of the return and volatility equations. Choy et al. (2008) studied this model by expressing the bivariate Student-t distribution as a scale mixture of bivariate normal distributions. We propose an alternative formulation by first deriving a conditional Student-t distribution for the return and a marginal Student-t distribution for the log-volatility, and then expressing these two Student-t distributions as scale mixtures of normal (SMN) distributions. Our approach separates the sources of outliers, allowing outliers generated by the return process to be distinguished from those generated by the volatility process, and is therefore an improvement over the approach of Choy et al. (2008). In addition, it allows an efficient model implementation using the WinBUGS software. A simulation study is conducted to assess the performance of the proposed approach and to compare it with the approach of Choy et al. (2008). In the empirical study, daily exchange rate returns of the Australian dollar against various currencies and daily index returns of various international stock markets are analysed. Model comparison relies on the Deviance Information Criterion, and convergence is monitored with Geweke's convergence diagnostic.
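For reference, both formulations rest on the standard scale-mixture representation of the Student-t distribution: a normal variate whose precision is mixed over a Gamma distribution. A generic statement (not in the paper's own notation) is

$$ y \mid \lambda \sim \mathcal{N}(\mu, \sigma^{2}/\lambda), \qquad \lambda \sim \mathrm{Gamma}(\nu/2, \nu/2) \;\Longrightarrow\; y \sim t_{\nu}(\mu, \sigma^{2}). $$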

2.
This paper presents a class of heavy-tailed market microstructure models with scale mixtures of normal distributions (MM-SMN), which includes two specific sub-classes, namely the slash and the Student-t distributions. Under a Bayesian perspective, a Markov chain Monte Carlo (MCMC) method is constructed to estimate all the parameters and latent variables in the proposed MM-SMN models. Two evaluation indices, namely the deviance information criterion (DIC) and a test of the white-noise hypothesis on the standardised residuals, are used to compare the MM-SMN models with the classic normal market microstructure (MM-N) model and with stochastic volatility models with scale mixtures of normal distributions (SV-SMN). Empirical studies on daily stock return data show that the MM-SMN models can accommodate possible outliers in the observed returns through the mixing latent variable. The results also indicate that the heavy-tailed MM-SMN models fit better than the MM-N model, and that the market microstructure model with the slash distribution (MM-s) fits best. Finally, both evaluation indices indicate that the market microstructure models with the three distributions are superior to the corresponding stochastic volatility models.
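As background, the DIC used here for model comparison has the standard definition (stated generically, not in the paper's notation)

$$ \mathrm{DIC} = \bar{D} + p_{D}, \qquad p_{D} = \bar{D} - D(\bar{\theta}), $$

where $D(\theta) = -2\log L(\theta)$ is the deviance, $\bar{D}$ its posterior mean and $\bar{\theta}$ the posterior mean of the parameters; smaller values indicate a better fit after penalizing the effective complexity $p_D$.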

3.
Diffusion processes governed by stochastic differential equations (SDEs) are a well-established tool for modelling continuous time data from a wide range of areas. Consequently, techniques have been developed to estimate diffusion parameters from partial and discrete observations. Likelihood-based inference can be problematic as closed form transition densities are rarely available. One widely used solution involves the introduction of latent data points between every pair of observations to allow an Euler-Maruyama approximation of the true transition densities to become accurate. In recent literature, Markov chain Monte Carlo (MCMC) methods have been used to sample the posterior distribution of latent data and model parameters; however, naive schemes suffer from a mixing problem that worsens with the degree of augmentation. A global MCMC scheme that can be applied to a large class of diffusions and whose performance is not adversely affected by the number of latent values is therefore explored. The methodology is illustrated by estimating parameters governing an auto-regulatory gene network, using partial and discrete data that are subject to measurement error.
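To make the data-augmentation idea concrete, here is a minimal sketch of the Euler-Maruyama scheme whose accuracy the latent points are introduced to improve. It is illustrative only: the paper treats multivariate diffusions observed partially and with measurement error, which this univariate sketch does not cover.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t0, t1, n_steps, rng=None):
    """Simulate one path of dX_t = drift(X_t) dt + diffusion(X_t) dW_t
    on [t0, t1] with the Euler-Maruyama discretization."""
    rng = rng or np.random.default_rng()
    dt = (t1 - t0) / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over dt
        x[i + 1] = x[i] + drift(x[i]) * dt + diffusion(x[i]) * dw
    return x

# Example: an Ornstein-Uhlenbeck process dX = -0.5 X dt + 0.2 dW
path = euler_maruyama(lambda x: -0.5 * x, lambda x: 0.2,
                      x0=1.0, t0=0.0, t1=10.0, n_steps=1000)
```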

4.
Optimal Experiment Design (OED) is a well-developed concept for regression problems that are linear-in-the-parameters. For experiment design to identify nonlinear Takagi-Sugeno (TS) models, non-model-based approaches, or OED restricted to the local model parameters (assuming the partitioning to be given), have been proposed. In this article, a Fisher Information Matrix (FIM) based OED method is proposed that considers both local model and partition parameters. Because the model is nonlinear, the FIM depends on the very model parameters that are the subject of the subsequent identification. To resolve this paradoxical situation, a model-free space-filling design (such as Latin Hypercube Sampling) is carried out first. The collected data permit design decisions such as determining the number of local models and identifying the parameters of an initial TS model. This initial TS model then permits a FIM-based OED, so that data optimal for a TS model are collected. The estimates of this first stage will in general not be ideal; to gain robustness against parameter mismatch, a sequential optimal design is applied. In this work the focus is on D-optimal designs. The proposed method is demonstrated on three nonlinear regression problems: an industrial axial compressor and two test functions.
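A minimal sketch of greedy sequential D-optimal point selection for a linear-in-the-parameters regressor is given below. All names and the quadratic feature map are illustrative assumptions; in the TS case the FIM additionally depends on the partition parameters, which this sketch omits.

```python
import numpy as np

def d_optimal_next_point(candidates, features, fim):
    """Greedy D-optimal selection: pick the candidate input maximizing
    det(FIM + phi(x) phi(x)^T), the information gain of one new sample."""
    best_x, best_det = None, -np.inf
    for x in candidates:
        phi = features(x)  # regressor vector at candidate input x
        det = np.linalg.det(fim + np.outer(phi, phi))
        if det > best_det:
            best_x, best_det = x, det
    return best_x

# Example: quadratic regressor phi(x) = [1, x, x^2] on a 1-D candidate grid,
# with the FIM accumulated from three already-collected design points.
features = lambda x: np.array([1.0, x, x**2])
fim = sum(np.outer(features(x), features(x)) for x in [-1.0, 0.0, 1.0])
next_x = d_optimal_next_point(np.linspace(-2, 2, 81), features, fim)
```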

5.
The Bayesian implementation of finite mixtures of distributions has been an area of considerable interest within the literature. Computational advances in approximation techniques such as Markov chain Monte Carlo (MCMC) methods have been a keystone of the Bayesian analysis of mixture models. This paper deals with the Bayesian analysis of finite mixtures of two particular types of multidimensional distributions: the multinomial and the negative-multinomial. A unified framework addressing the main topics in a Bayesian analysis is developed for the case with a known number of component distributions. In particular, theoretical results and algorithms to solve the label-switching problem are provided. An illustrative example shows that the proposed techniques are easily applied in practice.

6.
Recently, hybrid generative-discriminative approaches have emerged as an efficient knowledge representation and data classification engine. However, little attention has been devoted to the modeling and classification of non-Gaussian, and especially proportional, vectors. Our main goal in this paper is to discover the true structure of this kind of data by building probabilistic kernels from generative mixture models based on the Liouville family, from which we develop the Beta-Liouville distribution, which includes the well-known Dirichlet as a special case. The Beta-Liouville has a more general covariance structure than the Dirichlet, which makes it more practical and useful. Our learning technique is based on a principled, purely Bayesian approach, and the resulting models are used to generate support vector machine (SVM) probabilistic kernels based on information divergence. In particular, we show the existence of closed-form expressions for the Kullback-Leibler and Rényi divergences between two Beta-Liouville distributions, and hence between two Dirichlet distributions as a special case. Through extensive simulations and a number of experiments involving synthetic data, visual scene classification and texture image classification, we demonstrate the effectiveness of the proposed approaches.
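For the Dirichlet special case, the Kullback-Leibler divergence has a well-known closed form (quoted here in standard notation for orientation; the paper's Beta-Liouville expressions generalize it):

$$ \mathrm{KL}\bigl(\mathrm{Dir}(\alpha)\,\|\,\mathrm{Dir}(\beta)\bigr) = \log\frac{\Gamma(\alpha_0)}{\Gamma(\beta_0)} + \sum_{i=1}^{K}\log\frac{\Gamma(\beta_i)}{\Gamma(\alpha_i)} + \sum_{i=1}^{K}(\alpha_i - \beta_i)\bigl(\psi(\alpha_i) - \psi(\alpha_0)\bigr), $$

where $\alpha_0 = \sum_i \alpha_i$, $\beta_0 = \sum_i \beta_i$ and $\psi$ is the digamma function.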

7.
This paper addresses the problem of proportional data modeling and clustering using mixture models, a problem of great interest and importance for many practical pattern recognition, image processing, data mining and computer vision applications. Finite mixture models are broadly applicable to clustering problems, but they involve the challenging selection of the number of clusters, which requires a trade-off: the number of clusters must be sufficient to provide the discriminating capability required for a given application, yet if too many clusters are employed overfitting may occur, and if too few are used, underfitting. Here we approach the modeling and clustering of proportional data using infinite mixtures, which have been shown to be an efficient alternative to finite mixtures because they sidestep the selection of the optimal number of mixture components. In particular, we propose and discuss an infinite Liouville mixture model whose parameter values are fitted to the data through a principled Bayesian algorithm that we have developed, and which allows for uncertainty in the number of mixture components. Our experimental evaluation involves two challenging applications, namely text classification and texture discrimination, and suggests that the proposed approach can be an excellent choice for proportional data modeling.
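Infinite mixtures of this kind are commonly realized through a Dirichlet-process stick-breaking construction. The truncated sketch below draws the mixing weights only; it is a generic illustration, not the paper's Liouville-specific sampler.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng=None):
    """Draw mixture weights from a truncated stick-breaking construction:
    v_k ~ Beta(1, alpha), pi_k = v_k * prod_{j<k} (1 - v_j)."""
    rng = rng or np.random.default_rng()
    v = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

# The effective number of sizeable components is governed by the data
# and by alpha, rather than being fixed in advance.
weights = stick_breaking_weights(alpha=2.0, truncation=50)
```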

8.
The advent of mixture models has opened the possibility of flexible models that are practical to work with. Practitioners typically assume that data are generated from a Gaussian mixture. The inverted Dirichlet mixture has been shown to be a better alternative to the Gaussian mixture and to be of significant value in a variety of applications involving positive data. The inverted Dirichlet is, however, often undesirable, since it forces an assumption of positive correlation. Our focus here is to develop a Bayesian alternative to both the Gaussian and the inverted Dirichlet mixtures when dealing with positive data. The alternative that we propose is based on the generalized inverted Dirichlet distribution, which offers high flexibility and ease of use, as we show in this paper; moreover, it has a more general covariance structure than the inverted Dirichlet. The proposed mixture model is subjected to a fully Bayesian analysis based on Markov chain Monte Carlo (MCMC) simulation methods, namely Gibbs sampling and Metropolis-Hastings, used to compute the posterior distribution of the parameters, and on the Bayesian information criterion (BIC), used for model selection. The adoption of this purely Bayesian learning choice is motivated by the fact that Bayesian inference allows uncertainty to be dealt with in a unified and consistent manner. We evaluate our approach on two challenging applications concerning object classification and forgery detection.
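The BIC used for model selection takes its standard form (stated generically):

$$ \mathrm{BIC} = -2\log\hat{L} + k\log n, $$

where $\hat{L}$ is the maximized likelihood, $k$ the number of free parameters and $n$ the sample size; models with lower BIC are preferred.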

9.
In this paper, an original result is given: a sufficient condition for testing the identifiability of nonlinear delayed-differential models with constant delays and multiple inputs. The identifiability is studied for the linearized system, and a criterion for linear systems with constant delays is provided, from which the identifiability of the original nonlinear system can be proved. This result is obtained by combining a classical identifiability result for nonlinear ordinary differential systems due to Grewal and Glover (1976) with the identifiability theory for linear delayed-differential models developed by Orlov, Belkoura, Richard, and Dambrine (2002). This paper generalizes Denis-Vidal, Jauberthie, and Joly-Blanchard (2006), which deals with the specific case of nonlinear delayed-differential models with two delays and a single input.

10.
Various attempts have been made to control the depth of anaesthesia by observing different variables. In some studies, depth of anaesthesia has been correlated with inferential parameters and control has been exercised through these inferred parameters. No single system has been reported that provides a fully developed architecture for controlling the depth of anaesthesia. This study is concerned with the development of controllers and patient models via Artificial Neural Networks and regression analysis. Two types of data sets were used for the training and development of the models and controllers: the first for spontaneously breathing patients and the second for ventilated patients. All of the controllers and patient models gave satisfactory results when tested individually. These two sets of controllers and patient models were then studied in closed-loop modes, and the robustness of the regression patient model to parameter sensitivity was also investigated. Various tests were performed in these closed-loop configurations; the results and performance of these tests are discussed in the paper.

11.
Kevin Burns, Information Sciences, 2006, 176(11): 1570-1589
Bayesian inference provides a formal framework for assessing the odds of hypotheses in light of evidence. This makes Bayesian inference applicable to a wide range of diagnostic challenges in the field of chance discovery, including the problem of disputed authorship that arises in electronic commerce, counter-terrorism and other forensic applications. For example, when two documents are so similar that one is likely to be a hoax written from the other, the question is: Which document is most likely the source and which document is most likely the hoax? Here I review a Bayesian study of disputed authorship performed by a biblical scholar, and I show that the scholar makes critical errors with respect to several issues, namely: Causal Basis, Likelihood Judgment and Conditional Dependency. The scholar's errors are important because they have a large effect on his conclusions and because similar errors often occur when people, both experts and novices, are faced with the challenges of Bayesian inference. As a practical solution, I introduce a graphical system designed to help prevent the observed errors. I discuss how this decision support system applies more generally to any problem of Bayesian inference, and how it differs from the graphical models of Bayesian Networks.
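The underlying calculation is the standard odds form of Bayes' rule (a generic statement, not the scholar's specific numbers):

$$ \frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}, $$

i.e., the posterior odds of "document 1 is the source" against "document 2 is the source" equal the likelihood ratio of the evidence times the prior odds.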

12.
Mixture of experts (ME) models comprise a family of modular neural network architectures that aim to distill complex problems into simple subtasks. This is done by deploying a separate gating module that softly divides the input space into overlapping regions, each assigned to one or more expert networks. In contrast, support vector machines (SVMs) are kernel-based, neural-network-like models that constitute an approximate implementation of the structural risk minimization principle. Such learning machines follow the simple but powerful idea of nonlinearly mapping input data into high-dimensional feature spaces in which a linear decision surface discriminating different regions is designed. In this work, we formally characterize and empirically evaluate a novel approach, named Mixture of Support Vector Machine Experts (MSVME), whose main purpose is to combine the complementary properties of the SVM and ME models. In the formal characterization, an algorithm based on a maximum likelihood criterion is considered for MSVME training, and we demonstrate that each expert can be trained from an SVM perspective. In the empirical evaluation, simulation results involving nonlinear dynamic system identification problems are reported, contrasting the performance of the MSVME approach with that of conventional SVM and ME models.
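The ME decomposition referred to here takes the standard form with a softmax gating network (a generic statement; in MSVME the experts are realized by SVMs):

$$ p(y \mid x) = \sum_{j=1}^{M} g_j(x)\, p_j(y \mid x), \qquad g_j(x) = \frac{\exp(v_j^{\top}x)}{\sum_{k=1}^{M}\exp(v_k^{\top}x)}, $$

where the gate $g_j(x)$ softly assigns input $x$ to expert $j$.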

13.
This paper makes two main contributions. The first is the definition of an accurate nonlinear model of a section of a real 320 MW power plant. The second, within this global framework, is the modelling of the most common faults that may occur in plants like the one considered in the paper. Together, these models allow the simulation of the system both under normal working conditions and under anomalous conditions caused by the occurrence of one of the modelled faults. The simulation model can be integrated with the automation system of the plant and used in real time, providing plant technicians with crucial information on plant behaviour; for instance, fault detection and diagnosis can be accomplished in a natural way. Simulation results and comparisons with real data show the effectiveness of the proposed approach.

14.
This article describes new software for modeling correlated binary data based on orthogonalized residuals, a recently developed estimating-equations approach that includes, as a special case, alternating logistic regressions. The software is flexible with respect to fitting: the user can choose estimating equations for the association model based on either alternating logistic regressions or orthogonalized residuals, the latter providing a non-diagonal working covariance matrix for the second-moment parameters and hence potentially greater efficiency. Regression diagnostics based on this method are also implemented in the software. The mathematical background is briefly reviewed and the software is applied to medical data sets.
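In alternating logistic regressions, the within-cluster association is modelled through pairwise log odds ratios; in conventional notation (assumed here, not quoted from the article), for responses $Y_{ij}$ and $Y_{ik}$ in cluster $i$,

$$ \psi_{ijk} = \log\frac{P(Y_{ij}=1, Y_{ik}=1)\,P(Y_{ij}=0, Y_{ik}=0)}{P(Y_{ij}=1, Y_{ik}=0)\,P(Y_{ij}=0, Y_{ik}=1)} = z_{ijk}^{\top}\alpha, $$

so the association parameters $\alpha$ enter through a regression on pair-level covariates $z_{ijk}$.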

15.
We consider the problem of estimating the parameters of a distribution when the underlying events are themselves unobservable. The aim of the exercise is to perform a task (for example, searching a web-site or querying a distributed database) based on a distribution involving the state of nature, except that we are not allowed to observe the various "states of nature" involved in this phenomenon. In particular, we concentrate on the task of searching for an object in a set of N locations (or bins) {C_1, C_2, ..., C_N}, in which the probability of the object being in location C_i is p_i, where P = [p_1, p_2, ..., p_N]^T is called the Target Distribution. The probability of locating the object in a bin within a specified time, given that it is in the bin, is given by a function called the Detection function, which, in its most common instantiation, is specified by an exponential function. The intention is to allocate the available resources so as to maximize the probability of locating the object. The handicap, however, is that the time allowed is limited, and thus the fact that the object is not located in bin C_i within a specified time does not necessarily imply that the object is not in C_i. This problem has applications in searching large databases, distributed databases, and the world-wide web, where the locations of the files sought are unknown, and in developing various military and strategic policies. All of the research done in this area has assumed knowledge of the {p_i}. In this paper we consider the problems of obtaining error bounds, estimating the Target Distribution, and allocating the search times when the {p_i} are unknown. To the best of our knowledge, these results are of a pioneering sort: they are the first available results in this area, and are particularly interesting because, as mentioned earlier, the events concerning the Target Distribution are, in themselves, unobservable.
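To fix ideas, here is a sketch of the known-{p_i} baseline the abstract describes: the detection objective under an exponential Detection function, and a water-filling time allocation obtained from the first-order conditions. The paper's actual contribution, handling unknown {p_i}, is not covered by this sketch, and all numbers below are illustrative.

```python
import numpy as np

def detection_prob(p, lam, t):
    """P(locate object) = sum_i p_i * (1 - exp(-lam_i * t_i)) under an
    exponential Detection function with rate lam_i in bin C_i."""
    return float(np.sum(p * (1.0 - np.exp(-lam * t))))

def allocate_time(p, lam, total_time, iters=60):
    """Maximize detection_prob subject to sum(t) = total_time.
    First-order conditions give t_i = max(0, ln(p_i*lam_i/mu)/lam_i);
    the multiplier mu is found by bisection."""
    lo, hi = 1e-12, float(np.max(p * lam))
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        t = np.maximum(0.0, np.log(p * lam / mu) / lam)
        if t.sum() > total_time:
            lo = mu  # allocation too generous: raise the threshold mu
        else:
            hi = mu
    return t

p = np.array([0.5, 0.3, 0.2])    # illustrative Target Distribution
lam = np.array([1.0, 2.0, 0.5])  # illustrative detection rates
t = allocate_time(p, lam, total_time=3.0)
```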

Qingxin Zhu received his Bachelor's degree in Mathematics in 1981 from Sichuan Normal University, China, and his Master's degree in Applied Mathematics from the Beijing Institute of Technology in 1984. From 1984 to 1988 he was employed by the Southwest Technical Physics Institute. In 1988, he continued his higher education in the Department of Mathematics, University of Ottawa, Canada, where he received a PhD in 1993. From 1993 to 1996, he did postgraduate research and received a second Master's degree in Computer Science from Carleton University, Canada. He is currently a Professor at the University of Electronic Science and Technology of China (UESTC). His research interests are Optimal Search Theory, Computer Applications, and Bioinformatics.

B. John Oommen was born in Coonoor, India on 9 September 1953. He obtained his B.Tech. degree from the Indian Institute of Technology, Madras, India in 1975, and his M.E. from the Indian Institute of Science in Bangalore, India in 1977. He then obtained his MS and PhD from Purdue University, West Lafayette, Indiana in 1979 and 1982, respectively. He joined the School of Computer Science at Carleton University in Ottawa, Canada, in the 1981-1982 academic year. He is still at Carleton and holds the rank of Full Professor. Since July 2006, he has held the honorary rank of Chancellor's Professor, a lifetime award from Carleton University. His research interests include Automata Learning, Adaptive Data Structures, Statistical and Syntactic Pattern Recognition, Stochastic Algorithms, and Partitioning Algorithms. He is the author of more than 280 refereed journal and conference publications, and is a Fellow of the IEEE and a Fellow of the IAPR. Dr. Oommen is on the Editorial Board of the IEEE Transactions on Systems, Man and Cybernetics, and Pattern Recognition.

16.

Context

Adopting IT innovation in organizations is a complex decision process driven by technical, social and economic issues. Organizations that decide to adopt an innovation therefore face uncertain implementation success, as the actual use of a new technology may differ from the use expected at adoption. The misalignment between the planned and the effective use of an innovation is called the assimilation gap.

Objective

This research aims to define a quantitative instrument for measuring the assimilation gap and to apply it to the adoption of open source software (OSS).

Method

In this paper, we use Arthur's theory of path dependence and increasing returns. In particular, we model the use of software applications (planned or actual) by stochastic processes defined by the daily numbers of files created with the applications. We quantify the assimilation gap by comparing the resulting models using proximity measures.
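As an illustration only (this abstract does not specify which proximity measure the paper adopts), the planned-use and actual-use daily file-count series might be compared with a distributional distance such as the two-sample Kolmogorov-Smirnov statistic; all numbers below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical daily file counts under planned use vs. observed actual use.
planned = np.array([12, 15, 11, 14, 13, 16, 12])
actual = np.array([4, 6, 3, 7, 5, 4, 6])

# Two-sample KS statistic as one possible proximity measure: values near 0
# indicate indistinguishable use; values near 1 indicate a wide gap.
gap = stats.ks_2samp(planned, actual).statistic
```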

Results

We apply and validate our method on a real case study of the introduction of OpenOffice. We found a gap between the planned and the effective use despite well-defined directives to use the new OSS technology. These findings suggest a need for strategy re-calibration that takes into account environmental factors and individual attitudes.

Conclusions

The theory of path dependence is a valid instrument for modelling the assimilation gap, provided that information on the strategy toward innovation and quantitative data on actual use are available.

17.
This paper presents an economic lot-sizing problem with perishable inventory and general economies-of-scale cost functions. For the case with backlogging allowed, a mathematical model is formulated and several properties of the optimal solutions are explored. With the help of these optimality properties, a polynomial-time approximation algorithm is developed using a new method: it adopts a shift technique to obtain a feasible solution of a subproblem and takes the optimal solution of that subproblem as an approximate solution of the original problem. The worst-case performance ratio of the approximation algorithm is proven to be $(4\sqrt{2}+5)/7$. Finally, an instance illustrates that the bound is tight.
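For reference, this bound evaluates numerically to $(4\sqrt{2}+5)/7 \approx 1.522$: the cost of the approximate solution is guaranteed to be within roughly 52% of the optimal cost.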
