Similar Documents
20 similar documents found.
1.
We investigate the use of continuous features in the maximum entropy (MaxEnt) model. We explain why the MaxEnt model with the moment constraint (MaxEnt-MC) works well with binary features but not with continuous features. We describe how to enhance the constraints on continuous features and show that the weights associated with continuous features should be continuous functions instead of single values. We propose a spline-based solution to the MaxEnt model with non-linear continuous weighting functions and show that the optimization problem can be converted into a standard log-linear model in a higher-dimensional space. We report empirical results on two classification tasks that contain continuous features. The results confirm our insight and show that our proposed solution consistently outperforms the MaxEnt-MC model and the bucketing approach by significant margins.
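The core idea above, replacing a single weight on a continuous feature with a continuous weighting function, can be illustrated with a piecewise-linear (hat-function) basis expansion, a simple special case of a spline construction. This is an illustrative sketch, not the authors' implementation; the function name and knot placement are hypothetical.

```python
def hat_basis(x, knots):
    """Evaluate piecewise-linear (hat) basis functions at x.

    Returns one value per knot. Inside the knot range the values are
    non-negative and sum to 1, so a weighted sum of them defines a
    continuous, piecewise-linear weighting function of x. Feeding these
    values to a log-linear model as features makes the effective weight
    of the original continuous feature a continuous function.
    """
    n = len(knots)
    vals = []
    for i, k in enumerate(knots):
        left = knots[i - 1] if i > 0 else k
        right = knots[i + 1] if i < n - 1 else k
        if left < x <= k:
            vals.append((x - left) / (k - left))   # rising edge of hat i
        elif k <= x < right:
            vals.append((right - x) / (right - k))  # falling edge of hat i
        else:
            vals.append(0.0)
    return vals
```

A classifier score would then be `dot(weights, hat_basis(x, knots))`, which is linear in the expanded features but piecewise-linear in `x` itself.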

2.
It is common in epidemiology and other fields that the data being analyzed are collected through error-prone observations and that the variances of the measurement errors change across observations. Heteroscedastic measurement error (HME) models have been developed for such data. This paper extends the structural HME model to situations in which the observations jointly follow a scale mixture of normals (SMN) distribution. We develop EM algorithms to compute the maximum likelihood estimates for the model both with and without equation error, and derive closed forms of the asymptotic variances. We also conduct simulations to verify the effectiveness of the EM estimates and to confirm their robustness under heavy-tailed SMN distributions. A practical application to data from the WHO MONICA Project on cardiovascular disease is reported.

3.
A new likelihood-based AR approximation is given for ARMA models. The usual algorithms for computing the likelihood of an ARMA model require O(n) flops per function evaluation. Using our new approximation, an algorithm is developed which requires only O(1) flops in repeated likelihood evaluations. In most cases, the new algorithm gives results identical to or very close to the exact maximum likelihood estimate (MLE). This algorithm is easily implemented in high-level quantitative programming environments (QPEs) such as Mathematica, MATLAB and R. To obtain reasonable speed, previous ARMA maximum likelihood algorithms are usually implemented in C or some other machine-efficient language. With our algorithm it is easy to do maximum likelihood estimation for long time series directly in the QPE of your choice. The new algorithm is extended to obtain the MLE for the mean parameter. Simulation experiments that illustrate the effectiveness of the new algorithm are discussed. Mathematica and R packages which implement the algorithm discussed in this paper are available [McLeod, A.I., Zhang, Y., 2007. Online supplements to “Faster ARMA Maximum Likelihood Estimation”, 〈http://www.stats.uwo.ca/faculty/aim/2007/faster/〉]. Based on these package implementations, interested researchers should be able to implement this algorithm in other QPEs.
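An AR approximation of an ARMA model rests on the fact that an invertible ARMA process has an AR(∞) representation whose coefficients come from dividing the AR polynomial by the MA polynomial. A minimal sketch of that polynomial division follows; the sign conventions (both polynomials stored as coefficient lists with leading 1, MA polynomial written as 1 + θ₁B + …) are assumptions, not necessarily those of the paper.

```python
def poly_divide(a, b, n):
    """First n coefficients of the power series a(B)/b(B).

    a and b are coefficient lists with a[0] == b[0] == 1. The recursion
    follows from matching coefficients in b(B) * c(B) = a(B):
        c_j = a_j - sum_{k=1..j} b_k * c_{j-k}.
    For an ARMA model phi(B) X_t = theta(B) e_t, calling
    poly_divide(phi_coeffs, theta_coeffs, n) yields the AR(infinity)
    weights pi_j with pi(B) X_t = e_t.
    """
    c = []
    for j in range(n):
        aj = a[j] if j < len(a) else 0.0
        s = sum(b[k] * c[j - k] for k in range(1, min(j, len(b) - 1) + 1))
        c.append(aj - s)
    return c
```

For example, for the ARMA(1,1) model (1 - 0.5B)X_t = (1 + 0.4B)e_t, the first AR(∞) coefficients are 1, -0.9, 0.36, -0.144; truncating this series at a high order gives the fast AR approximation to the likelihood.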

4.
In view of the non-probabilistic nature of experiments, two new measures of weighted fuzzy entropy are introduced, and their essential properties are studied to establish their validity. Since measures of entropy can be used in the study of optimization principles when only partial information is available, we apply both the existing and the newly introduced weighted measures of fuzzy entropy to study the maximum entropy principle.
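The abstract does not state the new measures explicitly. As a point of reference, a commonly used weighted extension of the De Luca-Termini fuzzy entropy can be sketched as follows; the weighting form below is an assumption for illustration, not the paper's definition.

```python
import math

def weighted_fuzzy_entropy(mu, w):
    """An assumed weighted De Luca-Termini fuzzy entropy:
        H_w = -sum_i w_i * [mu_i*ln(mu_i) + (1-mu_i)*ln(1-mu_i)].

    Crisp memberships (0 or 1) contribute nothing; mu_i = 0.5
    contributes the most, matching the usual fuzzy-entropy axioms.
    """
    def h(p):
        if p in (0.0, 1.0):
            return 0.0  # limit p*ln(p) -> 0 as p -> 0
        return -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return sum(wi * h(mi) for mi, wi in zip(mu, w))
```

The essential properties mentioned in the abstract correspond here to: H_w vanishes exactly on crisp sets, is maximal at membership 0.5, and scales linearly in the weights.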

5.
The maximum entropy principle (MEP) is used to select a natural probability distribution from the many possible distributions that satisfy the same moment conditions. The MEP can accommodate higher-order moment information and therefore yields a higher-quality PDF model. The performance of the MEP for PDF estimation is studied using more than four moments. For the case with four moments, the results are compared with those of the Pearson system. It is observed that, as higher-order moments are accommodated, the estimated PDF converges to the original one. A sensitivity analysis formulation of the failure probability based on the MEP is derived for reliability-based design optimization (RBDO), and its accuracy is compared with that of the finite difference method (FDM). Two RBDO examples, including a realistic three-dimensional wing design, are solved using the derived sensitivity formula and the MEP-based moment method. The results are compared with other methods, such as TR-SQP, FAMM + Pearson system, and FFMM + Pearson system, in terms of accuracy and efficiency. It is also shown that improving accuracy by including more moment terms can increase the numerical efficiency of optimization for the three-dimensional wing design. The moment method equipped with the MEP is found to be flexible and well suited for reliability analysis and design.
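A minimal discrete illustration of the MEP with a mean constraint is the classic loaded-die problem: among all distributions on {1,…,6} with a prescribed mean, the maximum entropy solution is exponential in the outcome, and its single parameter can be found by bisection. This toy sketch shows the mechanics only; it is not the paper's continuous, multi-moment method.

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, iters=200):
    """Maximum-entropy distribution on {1,...,6} with a fixed mean.

    The MEP solution has exponential-family form p_i proportional to
    exp(lam * i); since the implied mean is strictly increasing in lam,
    we bisect on lam until the mean matches target_mean.
    """
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, 7)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]
```

With target mean 3.5 the solution collapses to the uniform distribution (λ = 0), while a mean of 4.5 tilts the probabilities toward higher faces; adding higher-order moment constraints would add one parameter per moment, which is where numerical optimization (as in the paper) replaces bisection.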

6.
In this paper, we propose a test of fit based on maximum entropy. The asymptotic distribution of the proposed test statistic is established and a corrected form for small and medium sample sizes is furnished. The performance of the test is investigated through extensive Monte Carlo simulations. Real examples are also presented and analyzed.

7.
The problem of consistent estimation in measurement error models in a linear relation with not necessarily normally distributed measurement errors is considered. Three possible estimators, constructed as different combinations of the estimators arising from direct and inverse regression, are considered. The efficiency properties of these three estimators are derived, and the effect of non-normally distributed measurement errors is analyzed. A Monte Carlo experiment is conducted to study the performance of these estimators in finite samples.

8.
A useful class of partially nonstationary vector autoregressive moving average (VARMA) models is considered with regard to parameter estimation. An exact maximum likelihood (EML) approach is developed on the basis of a simple transformation applied to the error-correction representation of the models considered. The employed transformation is shown to provide a standard VARMA model with the important property that it is stationary. Parameter estimation can thus be carried out by applying standard EML methods to the stationary VARMA model obtained from the error-correction representation. This approach resolves at least two problems related to the current limited availability of EML estimation methods for partially nonstationary VARMA models. First, it resolves the apparent impossibility of computing the exact log-likelihood for such models using currently available methods. Second, it resolves the inadequacy of treating lagged endogenous variables as exogenous variables in the error-correction representation. The theoretical discussion is followed by an example using a popular data set. The example illustrates the feasibility of the EML estimation approach as well as some of its potential benefits in practically relevant cases, which are easy to come across. As in the case of stationary models, the proposed EML method provides estimated model structures that are more reliable and accurate than results produced by conditional methods.

9.
In this paper we present a comprehensive maximum entropy (MaxEnt) procedure for classification tasks. The procedure is applied successfully to the problem of estimating the probability distribution function (pdf) of a class with a specific pattern, which is viewed as a probabilistic model handling the classification task. We propose an efficient algorithm for constructing non-linear discriminating surfaces using the MaxEnt procedure. The experiments we carried out show the performance and the various advantages of our approach.

10.
The problem of estimating the width of the symmetric uniform distribution on the line when data are measured with normal additive error is considered. The main purpose is to discuss the efficiency of the maximum likelihood estimator and the moment method estimator. It is shown that the model is regular and that the maximum likelihood estimator is more efficient than the moment method estimator. A sufficient condition is also given for the existence of both estimators.
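Under the model described, a symmetric uniform variable observed with known normal additive error, the moment method estimator follows directly from Var(Y) = w²/12 + σ², where w is the unknown width. A simulation sketch of that estimator follows; the variable names, sample size, and parameter values are illustrative, and the more efficient MLE from the paper is not reproduced here.

```python
import math
import random

def width_moment_estimate(y, sigma):
    """Moment estimator of the width w of U(-w/2, w/2) observed with
    known N(0, sigma^2) additive error:
        Var(Y) = w^2/12 + sigma^2  =>  w_hat = sqrt(12*(s^2 - sigma^2)).
    The max(..., 0) guards against a sample variance below sigma^2,
    in which case the estimator does not exist and we return 0."""
    n = len(y)
    m = sum(y) / n
    s2 = sum((v - m) ** 2 for v in y) / (n - 1)
    return math.sqrt(max(12.0 * (s2 - sigma ** 2), 0.0))

# Simulate: true width 2.0, error sd 0.2
random.seed(1)
sigma, width = 0.2, 2.0
sample = [random.uniform(-width / 2, width / 2) + random.gauss(0, sigma)
          for _ in range(20000)]
w_hat = width_moment_estimate(sample, sigma)
```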

11.
Maximum likelihood estimation has a rich history. It has been successfully applied to many problems including dynamical system identification. Different approaches have been proposed in the time and frequency domains. In this paper we discuss the relationship between these approaches and we establish conditions under which the different formulations are equivalent for finite length data. A key point in this context is how initial (and final) conditions are considered and how they are introduced in the likelihood function.

12.
Based on the relation between the entropy of a system of independent, distinguishable particles and its partition function, an adaptive fuzzy neural network method is used to correlate the standard entropies of cations, with the atomic weight of the element and its number of electron shells as parameters. A subtractive clustering algorithm is used to determine the structure of the fuzzy neural network, and a fuzzy inference system is used to tune the network parameters; the simulation results are satisfactory. The standard entropies of 70 cations in solid compounds are successfully correlated, and on this basis the standard entropies of 17 cations for which values are currently lacking are predicted. Adaptive fuzzy neural networks are expected to become an auxiliary tool for studying the structure-property relationships of elements and compounds.

13.
While adaptive control theory has been used in numerous applications to achieve given system stabilisation or command-following criteria without excessive reliance on mathematical models, the ability to obtain predictable transient performance remains an important problem – especially for applications to safety-critical systems and when there is no a priori knowledge of upper bounds on the existing system uncertainties. To address this problem, we present a new approach to improve the transient performance of adaptive control architectures. In particular, our approach is predicated on a novel controller architecture, which involves added terms in the update law termed artificial basis functions. These terms are constructed through a gradient optimisation procedure to minimise the system error between an uncertain dynamical system and a given reference model during the learning phase of an adaptive controller. We provide a detailed stability analysis of the proposed approach, discuss the practical aspects of its implementation, and illustrate its efficacy on a numerical example.

14.
Assuming data domains are partially ordered, we define the partially ordered relational algebra (PORA) by allowing the ordering predicate to be used in formulae of the selection operator σ. We apply Paredaens and Bancilhon's theorem to examine the expressiveness of the PORA, and show that the PORA expresses exactly the set of all possible relations which are invariant under order-preserving automorphisms of databases. The extension is consistent with the two important extreme cases of unordered and linearly ordered domains. We also investigate the three hierarchies of (1) computable queries, (2) query languages and (3) partially ordered domains, and show that there is a one-to-one correspondence between them.
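A toy sketch of a PORA-style selection: the partial order is supplied explicitly as a set of pairs, closed under reflexivity and transitivity, and the selection operator σ keeps exactly the tuples satisfying the ordering predicate between two attributes. The element names and the example relation are hypothetical.

```python
def reflexive_transitive_closure(pairs, elems):
    """Close a set of order pairs under reflexivity and transitivity,
    yielding the full partial-order relation <= as a set of pairs."""
    le = {(e, e) for e in elems} | set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(le):
            for c, d in list(le):
                if b == c and (a, d) not in le:
                    le.add((a, d))
                    changed = True
    return le

def select_le(relation, i, j, le):
    """sigma_{t[i] <= t[j]}: keep tuples whose i-th component is below
    the j-th component in the partial order le."""
    return {t for t in relation if (t[i], t[j]) in le}
```

Note that incomparable pairs (here `c` and `d` below) are simply filtered out, which is exactly how the PORA generalizes selection over linearly ordered domains.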

15.
A complex system is a system composed of many dynamic elements with mutual interactions. This paper proposes a unified approach for the design of an information processing system using a complex system. The method of design is based on the maximum entropy principle. After a detailed explanation, the proposed method is applied to the design of a spatial filter using a complex system. This work was presented, in part, at the International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1996.

16.
A new approach to maximum entropy tomographic image reconstruction is presented. It is shown that by using a finite-dimensional subspace, one can obtain an approximation to the solution of a maximum entropy optimization problem set in L2(D). An example of an appropriate finite element subspace for a two-dimensional parallel beam projection geometry is examined. Particular attention is paid to the case where the x-ray projection data are sparse. In the current work, this means that the number of projections is small (in practice, perhaps only 5–20). A priori information in the form of known maximum and minimum densities of the materials being scanned is built into the model. A penalty function, added to the entropy term, is used to control the residual error in meeting the projection measurements. The power of the technique is illustrated by a sparse data reconstruction and the resulting image is compared to those obtained by a conventional method.

17.
Process capability analysis has been widely applied in the field of quality control to monitor the performance of industrial processes. In practice, the lifetime performance index C_L is a popular means to assess the performance and potential of a process, where L is the lower specification limit. Nevertheless, many processes follow a non-normal lifetime model, so the assumption of normality is often erroneous. Progressive censoring schemes are quite useful in many practical situations where budget constraints are in place or there is a demand for rapid testing. This study applies a data transformation to construct a maximum likelihood estimator (MLE) of C_L under the Burr XII distribution based on progressively Type II right-censored samples. The MLE of C_L is then utilized to develop a new hypothesis testing procedure when L is known. Finally, we give two examples to illustrate the use of the testing procedure at a given significance level α.
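For intuition about the Burr XII likelihood (leaving aside the progressive censoring and the C_L index treated in the paper): when the first shape parameter c is known, the complete-sample MLE of the second shape parameter k has the closed form k̂ = n / Σ ln(1 + xᵢᶜ), because ln(1 + Xᶜ) is exponentially distributed with rate k. A simulation sketch under assumed parameter values:

```python
import math
import random

def burr12_sample(n, c, k, rng):
    """Inverse-CDF sampling from Burr XII: F(x) = 1 - (1 + x^c)^(-k)."""
    return [((1.0 - rng.random()) ** (-1.0 / k) - 1.0) ** (1.0 / c)
            for _ in range(n)]

def burr12_mle_k(xs, c):
    """Closed-form MLE of k when c is known.

    Setting d/dk of the log-likelihood
        n*ln(c) + n*ln(k) + (c-1)*sum(ln x) - (k+1)*sum(ln(1+x^c))
    to zero gives k_hat = n / sum(ln(1 + x_i^c))."""
    return len(xs) / sum(math.log(1.0 + x ** c) for x in xs)

rng = random.Random(7)
xs = burr12_sample(50000, c=2.0, k=1.5, rng=rng)
k_hat = burr12_mle_k(xs, 2.0)
```

Under progressive Type II censoring, as in the paper, the sum is replaced by a weighted sum over the observed order statistics, but the exponential-transformation idea is the same.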

18.
This paper is concerned with the application of a minimum principle, derived for general nonlinear partially observable exponential-of-integral control problems, to solve linear-exponential-quadratic-Gaussian problems. This minimum principle is the stochastic analog of Pontryagin's minimum principle for deterministic systems. It consists of an information state equation, an adjoint process governed by a stochastic partial differential equation with a terminal condition, and a Hamiltonian functional. Two methods are employed to obtain the optimal control law. The first method appeals to the well-known approach of completing the squares, by first determining the optimal control law that minimizes the Hamiltonian functional. The second method provides significant insight into relations with the Hamilton-Jacobi approach associated with completely observable exponential-of-integral control problems. These methods of solution are particularly attractive because they do not assume a certainty equivalence principle; hence they can be used to solve nonlinear problems as well.

19.
Dmitri V. Malakhov, Calphad, 2011, 35(1): 142-147
The historically first geometrical method for evaluating the Gibbs energy of a ternary solution from the Gibbs energies of binary solutions, pioneered by Bonnier and Caboz, correctly reproduces the configurational entropy, but not the regular term. In contrast, the constructs suggested by Toop, Kohler, Colinet and Muggianu result in the correct zero-order interaction parameter, but underestimate the configurational entropy. It is demonstrated that it is always possible to select compositions of the binary solutions reproducing both the regular term and the ideal entropy of mixing. A consistent method of finding these compositions is developed.
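The "correct zero-order interaction parameter" property can be checked directly for Muggianu's symmetric construct: with purely regular binaries, G_ij^ex(v_i, v_j) = L_ij·v_i·v_j evaluated at the Muggianu binary compositions, the extrapolation returns exactly Σ L_ij·x_i·x_j. A sketch with hypothetical interaction parameters:

```python
def muggianu_excess(x, L):
    """Muggianu extrapolation of ternary excess Gibbs energy.

    x: ternary mole fractions (x0, x1, x2); L[(i, j)]: regular-solution
    parameter of binary i-j. Each binary is evaluated at the Muggianu
    composition v_i = (1 + x_i - x_j)/2, v_j = (1 + x_j - x_i)/2 and
    weighted by 4*x_i*x_j / ((1 + x_i - x_j)*(1 + x_j - x_i)),
    i.e. x_i*x_j/(v_i*v_j)."""
    total = 0.0
    for i, j in [(0, 1), (0, 2), (1, 2)]:
        vi = (1.0 + x[i] - x[j]) / 2.0   # binary composition on edge i-j
        vj = (1.0 + x[j] - x[i]) / 2.0
        g_bin = L[(i, j)] * vi * vj       # regular binary excess energy
        total += x[i] * x[j] / (vi * vj) * g_bin
    return total
```

The v_i·v_j factors cancel, which is why the regular term is reproduced exactly; what the abstract points out is that the configurational entropy is not treated so faithfully by these constructs.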

20.
Generalizing the entropy of fuzzy partitions, we study the entropy of a partition in the recently introduced product MV-algebra. The least common refinement of two partitions is defined, and the algebraic properties of the entropies and conditional entropies are examined.
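For the classical (crisp, probabilistic) special case, the least common refinement and the basic entropy inequality H(P∨Q) ≥ max(H(P), H(Q)) can be sketched as follows; the product MV-algebra setting of the paper generalizes this beyond crisp partitions.

```python
import math
from itertools import product

def entropy(partition, prob):
    """Shannon entropy of a partition (iterable of blocks) of a finite
    probability space, where prob maps each outcome to its probability."""
    h = 0.0
    for block in partition:
        p = sum(prob[w] for w in block)
        if p > 0:
            h -= p * math.log(p)
    return h

def common_refinement(p, q):
    """Least common refinement P v Q: all nonempty pairwise
    intersections of a block of P with a block of Q."""
    return [b & c for b, c in product(p, q) if b & c]
```

On a uniform four-point space, refining two two-block partitions into the four singletons raises the entropy from ln 2 to ln 4, illustrating the monotonicity property the paper examines algebraically.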

