Similar Literature
20 similar records found (search time: 46 ms)
1.
Bayesian inference techniques have been applied to the analysis of fluctuations of post-synaptic potentials in the hippocampus. The underlying statistical model assumes that the varying synaptic signals are characterized by mixtures of an unknown number of individual Gaussian, or normal, component distributions. Each solution consists of a group of individual components with unique mean values and relative probabilities of occurrence and a predictive probability density. The advantages of Bayesian inference techniques over the alternative method of maximum likelihood estimation (MLE) of the parameters of an unknown mixture distribution include the following: (1) prior information may be incorporated in the estimation of model parameters; (2) conditional probability estimates of the number of individual components in the mixture are calculated; (3) flexibility exists in the extent to which the estimated noise standard deviation indicates the width of each component; (4) posterior distributions for component means are calculated, including measures of uncertainty about the means; and (5) probability density functions of the component distributions and the overall mixture distribution are estimated in relation to the raw grouped data, together with measures of uncertainty about these estimates. This expository report describes this novel approach to the unconstrained identification of components within a mixture and demonstrates the usefulness of the technique in the context of both simulations and the analysis of distributions of synaptic potential signals.
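For illustration, a minimal Python sketch of Bayesian estimation of the number of Gaussian components in a mixture, using scikit-learn's variational BayesianGaussianMixture as a stand-in for the authors' scheme; the synaptic-amplitude data are simulated, not from the paper.

    # Hedged sketch: a sparse prior lets surplus components collapse to ~0 weight,
    # giving a posterior-style estimate of how many components the data support.
    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    # Hypothetical amplitudes: three quantal components plus recording noise.
    amps = np.concatenate([rng.normal(mu, 0.15, 200) for mu in (0.5, 1.0, 1.5)])

    gmm = BayesianGaussianMixture(
        n_components=10,                  # generous upper bound
        weight_concentration_prior=0.01,  # sparse prior favours few components
        random_state=0,
    ).fit(amps.reshape(-1, 1))

    active = gmm.weights_ > 0.02
    print("estimated number of components:", active.sum())
    print("component means:", np.sort(gmm.means_[active].ravel()))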

2.
This paper presents a global damage detection and assessment algorithm based on a parameter estimation method using a finite-element model and the measured modal response of a structure. Damage is characterized as a reduction of the member constitutive parameter from a known baseline value. An optimization scheme is proposed to localize damaged parts of the structure. The algorithm accounts for the possibility of multiple solutions to the parameter estimation problem that arises from using spatially sparse measurements. Errors in parameter estimates caused by sensitivity to measurement noise are reduced by selecting a near-optimal measurement set from the data at each stage of the localization algorithm. Upon completion of the localization process, damage probability functions are computed for the candidate elements. Monte Carlo methods are used to compute the required probabilities based on the statistical distributions of the parameters for the damaged and the associated baseline structure. The algorithm is tested in a numerical simulation environment using a planar bridge truss as a model problem.
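A minimal sketch of the Monte Carlo step, assuming (hypothetically) Gaussian distributions for an element's identified stiffness parameter in the damaged and baseline states; the means and standard deviations are illustrative only.

    # Hedged sketch: damage probability as the Monte Carlo frequency with which
    # the identified parameter falls below its baseline counterpart.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    baseline = rng.normal(1.00, 0.05, n)  # baseline stiffness parameter (assumed)
    damaged = rng.normal(0.85, 0.07, n)   # estimate from the measured modes (assumed)

    print("damage probability ~", np.mean(damaged < baseline))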

3.
A good understanding of environmental effects on structural modal properties is essential for reliable performance of vibration-based damage diagnosis methods. In this paper, a method combining principal component analysis (PCA) and the support vector regression (SVR) technique is proposed for modeling temperature-caused variability of modal frequencies for structures instrumented with long-term monitoring systems. PCA is first applied to extract principal components from the measured temperatures for dimensionality reduction. The predominant feature vectors, in conjunction with the measured modal frequencies, are then fed into a support vector algorithm to formulate regression models that may take the thermal inertia effect into account. The research is focused on proper selection of the hyperparameters to obtain SVR models with good generalization performance. A grid search method with cross validation and a heuristic method are utilized for determining the optimal values of the SVR hyperparameters. The proposed method is compared with the method directly using measurement data to train SVR models and with the multivariate linear regression (MLR) method, through the use of long-term measurement data from a cable-stayed bridge. It is shown that PCA-compressed features make the training and validation of SVR models more efficient in terms of both model accuracy and computational cost, and that the formulated SVR model performs much better than the MLR model in generalization. When continuously measured data are available, the SVR model formulated with the thermal inertia effect taken into account achieves more accurate prediction than the one formulated without it.
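A minimal sketch of the PCA-plus-SVR pipeline with cross-validated grid search over the hyperparameters, using scikit-learn; the sensor data, frequency relation, and grid values are hypothetical.

    # Hedged sketch: compress measured temperatures with PCA, then regress a
    # modal frequency on the principal components with an RBF-kernel SVR.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    temps = rng.normal(20, 5, (500, 30))  # 30 temperature sensors (simulated)
    freq = 0.98 - 0.002 * temps.mean(axis=1) + rng.normal(0, 1e-3, 500)

    pipe = Pipeline([("pca", PCA(n_components=3)), ("svr", SVR(kernel="rbf"))])
    grid = GridSearchCV(
        pipe,
        {"svr__C": [1, 10, 100], "svr__epsilon": [1e-4, 1e-3]},
        cv=5,
    ).fit(temps, freq)
    print(grid.best_params_, grid.best_score_)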

4.
I. Klugkist, O. Laudy, and H. Hoijtink (2005) presented a Bayesian approach to analysis of variance models with inequality constraints. Constraints may play 2 distinct roles in data analysis. They may represent prior information that allows more precise inferences regarding parameter values, or they may describe a theory to be judged against the data. In the latter case, the authors emphasized the use of Bayes factors and posterior model probabilities to select the best theory. One difficulty is that interpretation of the posterior model probabilities depends on which other theories are included in the comparison. The posterior distribution of the parameters under an unconstrained model allows one to quantify the support provided by the data for inequality constraints without requiring the model selection framework.
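A minimal sketch of the final point: quantifying support for an inequality constraint directly from the unconstrained posterior. The posterior draws are simulated here; in practice they would come from fitting the unconstrained ANOVA model.

    # Hedged sketch: P(mu1 < mu2 < mu3 | data) as the fraction of unconstrained
    # posterior draws that satisfy the ordering.
    import numpy as np

    rng = np.random.default_rng(3)
    mu = rng.normal([1.0, 1.3, 1.5], 0.2, size=(50_000, 3))  # hypothetical draws

    ordered = (mu[:, 0] < mu[:, 1]) & (mu[:, 1] < mu[:, 2])
    print("posterior support for the constraint ~", ordered.mean())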

5.
Sib pair-selection strategies, designed to identify the most informative sib pairs in order to detect a quantitative-trait locus (QTL), give rise to a missing-data problem in genetic covariance-structure modeling of QTL effects. After selection, phenotypic data are available for all sibs, but marker data, and consequently the identity-by-descent (IBD) probabilities, are available only in selected sib pairs. One possible solution to this missing-data problem is to assign prior IBD probabilities (i.e., expected values) to the unselected sib pairs. The effect of this assignment in genetic covariance-structure modeling is investigated in the present paper. Two maximum-likelihood approaches to estimation are considered, the pi-hat approach and the IBD-mixture approach. In the simulations, sample size, selection criteria, QTL-increaser allele frequency, and gene action are manipulated. The results indicate that the assignment of prior IBD probabilities results in serious estimation bias in the pi-hat approach. Bias is also present in the IBD-mixture approach, although here the bias is generally much smaller. The null distribution of the log-likelihood ratio (i.e., in the absence of any QTL effect) does not follow the expected null distribution in the pi-hat approach after selection. In the IBD-mixture approach, the null distribution does agree with expectation.

6.
A Bayesian framework incorporating Markov chain Monte Carlo (MCMC) for updating the parameters of a sediment entrainment model is presented. Three subjects were pursued in this study. First, sensitivity analyses were performed via univariate MCMC. The results reveal that the posteriors resulting from two- and three-chain MCMC were not significantly different; two-chain MCMC converged faster than three chains. The proposal scale factor significantly affects the rate of convergence, but not the posteriors. The sampler outputs resulting from informed priors converged faster than those resulting from uninformed priors. The correlation coefficient of the Gram–Charlier (GC) probability density function (PDF) is a physical constraint imposed on the MCMC; a higher correlation slows the rate of convergence. The results also indicate that the parameter uncertainty is reduced as the number of input data increases. Second, multivariate MCMC was carried out to simultaneously update the velocity coefficient C and the statistical moments of the GC PDF. For fully rough flows, the distribution of C was significantly modified via multivariate MCMC. However, for transitional regimes the posterior values of C resulting from univariate and multivariate MCMC were not significantly different. For both rough and transitional regimes, the differences between the prior and posterior distributions of the statistical moments were limited. Third, the practical effect of updated parameters on the prediction of entrainment probabilities was demonstrated. With all the parameters updated, the sediment entrainment model was able to compute the entrainment probabilities more accurately and realistically. The present work offers an alternative approach to estimating hydraulic parameters that are not easily observed.
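A minimal sketch of a univariate random-walk Metropolis update of the kind described, showing where the proposal scale factor enters; the target posterior is a placeholder, and, as the abstract notes, the scale affects convergence speed rather than the posterior itself.

    # Hedged sketch: random-walk Metropolis for one parameter.
    import numpy as np

    def log_post(theta):                # stand-in log-posterior
        return -0.5 * (theta - 2.0) ** 2

    rng = np.random.default_rng(4)
    scale, theta, chain = 0.5, 0.0, []  # `scale` is the proposal scale factor
    for _ in range(20_000):
        prop = theta + scale * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                # accept; otherwise keep the current value
        chain.append(theta)
    print("posterior mean ~", np.mean(chain[5_000:]))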

7.
The estimation of orientation distribution functions (ODFs) from discrete orientation data, as produced by electron backscatter diffraction or crystal plasticity micromechanical simulations, is typically achieved via techniques such as the Williams–Imhof–Matthies–Vinel (WIMV) algorithm or generalized spherical harmonic expansions, which were originally developed for computing an ODF from pole figures measured by X-ray or neutron diffraction. These techniques rely on ad hoc methods for choosing parameters, such as smoothing half-width and bandwidth, and for enforcing positivity constraints and appropriate normalization. In general, such approaches provide little or no information-theoretic guarantees as to their optimality in describing the given dataset. In the current study, an unsupervised learning algorithm is proposed which uses a finite mixture of Bingham distributions for the estimation of ODFs from discrete orientation data. The Bingham distribution is an antipodally-symmetric, maximum-entropy distribution on the unit quaternion hypersphere. The proposed algorithm also introduces a minimum message length criterion, a common tool in information theory for balancing data likelihood with model complexity, to determine the number of components in the Bingham mixture. This criterion leads to ODFs which are less likely to overfit (or underfit) the data, eliminating the need for a priori parameter choices.
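A minimal sketch of the model-selection idea: pick the number of mixture components by minimizing a penalized-likelihood score. A Gaussian mixture and BIC stand in here for the Bingham mixture and the minimum message length criterion, neither of which has an off-the-shelf SciPy/scikit-learn implementation.

    # Hedged sketch: score K = 1..6 and keep the K with the lowest penalized
    # score, trading data likelihood against model complexity.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(m, 0.3, 300) for m in (-2, 0, 2)]).reshape(-1, 1)

    scores = {k: GaussianMixture(k, random_state=0).fit(x).bic(x) for k in range(1, 7)}
    print("selected number of components:", min(scores, key=scores.get))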

8.
Stochastic fluctuations and systematic errors severely restrict the potential of multispectral acquisition to improve scatter correction by energy-dependent processing in high-resolution positron emission tomography (PET). To overcome this limitation, three pre-processing approaches which reduce stochastic fluctuations and systematic errors without degrading spatial resolution were investigated: statistical variance was reduced by smoothing acquired data in energy space; systematic errors due to nonuniform detector efficiency were minimized by normalizing the data in the spatial domain; and the overall variance was further reduced by selecting an optimal pre-processing sequence. Selection of the best protocol to reduce stochastic fluctuations entailed comparisons between four smoothing algorithms (prior constrained (PC) smoothing, weighted smoothing (WS), ideal low-pass filtering (ILF) and mean median (MM) smoothing) and permutations of three pre-processing procedures (smoothing, normalization and subtraction of random events). Results demonstrated that spectral smoothing by WS, ILF and MM efficiently reduces the statistical variance in both the energy and spatial domains without observable spatial resolution loss. The ILF algorithm was found to be the most convenient in terms of simplicity and efficiency. Regardless of the position of subtraction of randoms in the sequence, reduction of the systematic errors by normalization followed by spectral smoothing to suppress statistical noise produced the best results. However, subtraction of random events first in the sequence reduces the computation load by half, since the need to pre-process this distribution before subtraction is removed. In summary, normalizing data in the spatial domain and smoothing data in energy space are essential steps required to reduce systematic errors and statistical variance independently without degrading the spatial resolution of multispectral PET data.
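A minimal sketch of ideal low-pass filtering in energy space: zero the Fourier coefficients of an acquired spectrum above a cutoff. The spectrum, binning, and cutoff index are invented for illustration.

    # Hedged sketch: ILF-style smoothing of a noisy 511 keV photopeak spectrum.
    import numpy as np

    rng = np.random.default_rng(6)
    energy = np.linspace(300, 650, 128)  # keV bins (hypothetical)
    spectrum = np.exp(-0.5 * ((energy - 511) / 30) ** 2) + 0.05 * rng.normal(size=128)

    coef = np.fft.rfft(spectrum)
    coef[10:] = 0.0                      # ideal low-pass cutoff (assumed index)
    smoothed = np.fft.irfft(coef, n=spectrum.size)
    print("high-frequency energy removed:", np.sum((spectrum - smoothed) ** 2))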

9.
Differential mortality exists in the United States both between racial/ethnic groups and along gradients of socioeconomic status. The specification of statistical models for processes underlying these observed disparities has been hindered by the fact that social and economic quantities are distributed in a highly nonrandom manner throughout the population. We sought to provide a substantive foundation for model development by representing the shape of the income-mortality relation by racial/ethnic group. We used data on black and white men and women from the longitudinal component of the National Health Interview Survey (NHIS), 1986-1990, which provided 1,191,824 person-years of follow-up and 12,165 mortal events. To account for family size when considering income, we used the ratio of annual family income to the federal poverty line for a family of similar composition. To avoid unnecessary categorizations and prior assumptions about model form, we employed kernel smoothing techniques and calculated the continuous mortality surface across dimensions of adjusted income and age for each of the gender and racial/ethnic groups. Representing regions of equal mortality density with contour plots, we observed interactions that need to be accommodated by any subsequent statistical models. We propose two general theories that provide a foundation for more elaborate and testable hypotheses in the future.
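A minimal sketch of a kernel-smoothed surface over age and adjusted income, using a Gaussian KDE of simulated coordinates as a crude stand-in for the continuous mortality surface; contouring the resulting grid yields the equal-density regions described.

    # Hedged sketch: evaluate a 2-D kernel density on a grid for contour plotting.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(7)
    age = rng.uniform(18, 90, 2_000)
    inc = rng.lognormal(0.5, 0.6, 2_000)  # family income / poverty line (simulated)
    kde = gaussian_kde(np.vstack([age, inc]))

    ages, incs = np.meshgrid(np.linspace(18, 90, 50), np.linspace(0.2, 6.0, 50))
    surface = kde(np.vstack([ages.ravel(), incs.ravel()])).reshape(50, 50)
    print("smoothed surface grid:", surface.shape)  # contour this grid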

10.
Massive amounts of data are generated during hot strip production in the hot-rolling metal forming process; the resultant dataset is sufficient for model learning in strip steel crown prediction. However, the data for high-grade nonoriented silicon steel are limited, and the rolling process parameters differ, resulting in poor crown prediction. Herein, a model based on the whale optimization algorithm and transfer learning is presented to predict the crowns of hot-rolled high-grade nonoriented silicon strip steel. The model is composed of convolutional and linear layers. The whale optimization algorithm is used to optimize the hyperparameters and obtain an optimal model during pretraining. The model is then fine-tuned on the limited silicon steel data to achieve model migration. The application results show that the correlation coefficient reaches 0.993, the highest prediction accuracy among the comparison models. Furthermore, the root mean square error is 1.14 μm, and the hit rate within 4.0 μm of crown deviation reaches 99.502%. In addition, the influences of four parameters on the crown of the silicon steel strip are studied based on response surfaces. The results indicate that the proposed model can efficiently predict silicon steel strip crowns.
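A minimal PyTorch sketch of the transfer step: pretrain a small convolutional-plus-linear network, freeze the convolutional layer, and fine-tune the head on scarce target data. The shapes, learning rate, and placeholder batch are assumptions, not the paper's configuration.

    # Hedged sketch: freeze early layers, fine-tune the rest on limited data.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv1d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(8 * 16, 32), nn.ReLU(), nn.Linear(32, 1),
    )
    # ... pretrain `model` on the abundant hot-rolled strip data here ...

    for p in model[0].parameters():  # freeze the convolutional layer
        p.requires_grad = False
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)

    x = torch.randn(4, 1, 16)        # placeholder silicon-steel batch
    y = torch.randn(4, 1)            # placeholder crown targets
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()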

11.
An integrated procedure based on a direct adaptive control algorithm is applied to structural systems for both vibration suppression and damage detection. The wider class of noncollocated actuator-sensor schemes is investigated through parameterized linear functions of the state variables that preserve the minimum phase property of the system. A larger number of mechanical parameters are shown to be identifiable in noncollocated configurations. Proper output selection, allowing for model-reference control and tracking-error-based parameter estimation under persistent excitation, is described. Using full-state feedback, these capabilities are effectively exploited for oscillation reduction and health monitoring of uncertain multi-degree-of-freedom (MDOF) shear-type structures.

12.
We introduce a fast block-iterative maximum a posteriori (MAP) reconstruction algorithm and apply it to four-dimensional reconstruction of gated SPECT perfusion studies. The new algorithm, called RBI-MAP, is based on the rescaled block iterative EM (RBI-EM) algorithm. We develop RBI-MAP based on similarities between the RBI-EM, ML-EM and MAP-EM algorithms. RBI-MAP requires far fewer iterations than MAP-EM, and so should result in acceleration similar to that obtained from using RBI-EM or OS-EM as opposed to ML-EM. When complex four-dimensional clique structures are used in the prior, however, evaluation of the smoothing prior dominates the processing time. We show that a simple scheme for updating the prior term in the heart region only for RBI-MAP results in savings in processing time of a factor of six over MAP-EM. The RBI-MAP algorithm incorporating 3D collimator-detector response compensation is demonstrated on a simulated 99mTc gated perfusion study. Results of RBI-MAP are compared with RBI-EM followed by a 4D linear filter. For the simulated study, we find that RBI-MAP provides consistently higher defect contrast for a given degree of noise smoothing than does filtered RBI-EM. This is an indication that RBI-MAP smoothing does less to degrade resolution gained from 3D detector response compensation than does a linear filter. We conclude that RBI-MAP can provide smooth four-dimensional reconstructions with good visualization of heart structures in clinically realistic processing times.
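For context, a minimal sketch of the basic ML-EM update that RBI-EM and RBI-MAP accelerate and regularize; the toy system matrix and Poisson data are placeholders, and this is the textbook algorithm rather than the paper's RBI-MAP implementation.

    # Hedged sketch: multiplicative ML-EM update, which preserves non-negativity.
    import numpy as np

    rng = np.random.default_rng(9)
    A = rng.uniform(0, 1, (40, 20))  # toy projector (bins x voxels)
    x_true = rng.uniform(0.5, 2.0, 20)
    y = rng.poisson(A @ x_true).astype(float)

    x = np.ones(20)
    sens = A.sum(axis=0)             # sensitivity term A^T 1
    for _ in range(100):
        x *= (A.T @ (y / (A @ x + 1e-12))) / sens
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))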

13.
This paper presents the specification and estimation of a model based on the mechanistic-empirical Pavement Design Guide (PDG) for estimating the resilient modulus of fine-grained soils by using common soil parameters and by combining two different data sources: a database developed with Hawaiian fine-grained soils and data extracted from the Long-Term Pavement Performance database for fine-grained subgrade soils. Two statistical techniques are combined to estimate the model parameters: joint estimation and mixed effects. Joint estimation considers multiple databases and allows identification of influential parameters that may be present in some but not all databases, whereas the mixed-effects statistical estimation approach is used to account for the within-group correlation between observations. The general structure of the PDG model is found acceptable if an allowance is made for the compaction level in addition to the saturation level in the PDG sigmoidal function. The resulting model contains parameters that are statistically significant and is more robust in that it can be used under a wider range of conditions than would have been possible if only one data source were available.
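A minimal sketch of pooling two data sources in a mixed-effects fit, with a random intercept per source to absorb within-group correlation; statsmodels' MixedLM stands in for the paper's combined joint-estimation and mixed-effects machinery, and all column names and coefficients are hypothetical.

    # Hedged sketch: random-intercept model over two simulated data sources.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(10)
    df = pd.DataFrame({
        "source": np.repeat(["hawaii", "ltpp"], 100),
        "saturation": rng.uniform(0.4, 1.0, 200),
        "compaction": rng.uniform(0.9, 1.05, 200),
    })
    df["log_Mr"] = (4.0 - 1.5 * df["saturation"] + 0.8 * df["compaction"]
                    + np.repeat(rng.normal(0, 0.1, 2), 100)  # per-source offset
                    + rng.normal(0, 0.05, 200))

    fit = smf.mixedlm("log_Mr ~ saturation + compaction", df, groups=df["source"]).fit()
    print(fit.params)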

14.
The focus of this paper is to demonstrate the application of a recently developed Bayesian state estimation method to the recorded seismic response of a building and to discuss the issue of model selection. The method, known as the particle filter, is based on stochastic simulation. Unlike the well-known extended Kalman filter, it is applicable to highly nonlinear systems with non-Gaussian uncertainties. The particle filter is applied to strong motion data recorded in the 1994 Northridge earthquake in a seven-story hotel whose structural system consists of nonductile reinforced-concrete moment frames, two of which were severely damaged during the earthquake. We address the issue of model selection. Two identification models are proposed: a time-varying linear model and a simplified time-varying nonlinear degradation model. The latter is derived from a nonlinear finite-element model of the building previously developed at Caltech. For the former model, the resulting performance is poor since the parameters need to vary significantly with time in order to capture the structural degradation of the building during the earthquake. The latter model performs better because it is able to characterize this degradation to a certain extent even with its parameters fixed. For this case study, the particle filter provides consistent state and parameter estimates, in contrast to the extended Kalman filter, which provides inconsistent estimates. It is concluded that for a state estimation procedure to be successful, at least two factors are essential: an appropriate estimation algorithm and a suitable identification model.
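A minimal sketch of a bootstrap particle filter on a scalar state-space model, showing the propagate-weight-resample cycle; the random-walk dynamics and noise levels are placeholders for the paper's nonlinear structural models.

    # Hedged sketch: bootstrap (sampling-importance-resampling) particle filter.
    import numpy as np

    rng = np.random.default_rng(11)
    T, N = 100, 2_000
    true_x = np.cumsum(rng.normal(0, 0.1, T))  # hidden state (simulated)
    y = true_x + rng.normal(0, 0.3, T)         # noisy observations

    particles = rng.normal(0, 1, N)
    est = []
    for t in range(T):
        particles = particles + rng.normal(0, 0.1, N)  # propagate
        w = np.exp(-0.5 * ((y[t] - particles) / 0.3) ** 2)
        w /= w.sum()                                   # normalize weights
        particles = particles[rng.choice(N, N, p=w)]   # resample
        est.append(particles.mean())
    print("final state error:", abs(est[-1] - true_x[-1]))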

15.
The following sequence, internal condition → symptom perception → appraisal → decision, models various symptom-based self-regulation processes. A formal mathematical model describes the first three steps by continuous variables and the decisions at the fourth step by binary variables. The stochastic transitions between the sequential steps are quantified by transition probabilities. The model is illustrated by blood glucose level estimation and by the detection and treatment of hypoglycemia in 78 patients with insulin-dependent diabetes mellitus. These patients completed 50 to 70 data-collection trials over 3 to 4 weeks, recording perceived symptoms, cognitive-motor performance, subjective estimates of blood glucose, decisions about treatment of hypoglycemia, and driving. A statistical estimation of the model's parameters demonstrates the utility of this approach for understanding the awareness, detection, and treatment of hypoglycemia as a process of symptom-based decision making.

16.
Data obtained from clinical practice are not uniform, because the patient's condition cannot be assumed to be constant. It is nevertheless important to estimate each patient's condition individually and in real time in order to make more appropriate treatment decisions. The classical statistical approach to the estimation problem views the patient's condition as, so to speak, an unknown constant that does not change with time; this is a wrong assumption in the clinical world. In Bayesian statistics, data or information available prior to the event (the prior probability) are used in the estimation process to obtain the posterior probability. Furthermore, data or information gained in the clinical setting are treated as facts rather than random variables and are combined with the prior probability. In other words, in Bayesian statistics, estimation becomes prediction. We use subjective thinking in making decisions in our daily practice of anesthesia; this subjectivity is based on beliefs or opinions that depend on the experience and information possessed by the anesthesiologist making the assessment. In Bayesian estimation, subjectivity, as a measure of belief, is incorporated as a prior probability, which, I believe, makes the approach more flexible in dealing with real-world problems than any other form of statistical estimation.
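A minimal sketch of the prior-to-posterior updating described, in its simplest conjugate (Beta-binomial) form; the clinical event, prior, and counts are purely illustrative.

    # Hedged sketch: a subjective Beta prior updated by observed outcomes.
    from scipy.stats import beta

    a, b = 2, 8                 # prior belief: the event is fairly unlikely
    successes, failures = 4, 6  # outcomes subsequently observed

    posterior = beta(a + successes, b + failures)
    print("prior mean:", a / (a + b))
    print("posterior mean:", posterior.mean())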

17.
Because the performance of a genetic algorithm (GA) is affected by many factors whose relationships are complex and hard to describe, a novel fuzzy-based adaptive genetic algorithm (FAGA), which combines a new artificial immune system with fuzzy system theory, is proposed; fuzzy theory is well suited to describing such highly complex problems. In FAGA, immune theory is used to improve the performance of the selection operation, and the crossover and mutation probabilities are adjusted dynamically by fuzzy inferences, which are developed according to the heuristic fuzzy relationship between algorithm performance and control parameters. The experiments show that FAGA efficiently overcomes the shortcomings of the GA, i.e., premature convergence and slow convergence, and obtains better results than two typical fuzzy GAs. Finally, FAGA was applied to the parameter estimation of a reaction kinetics model, and a satisfactory result was obtained.
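A minimal sketch of a GA whose crossover and mutation probabilities adapt to population diversity; a crisp threshold rule stands in for the paper's fuzzy inference, and the toy objective, operators, and thresholds are invented for illustration.

    # Hedged sketch: low diversity -> raise mutation, lower crossover.
    import numpy as np

    rng = np.random.default_rng(12)
    pop = rng.uniform(-5, 5, (40, 3))

    def fitness(p):
        return -np.sum((p - 1.0) ** 2, axis=1)  # optimum at (1, 1, 1)

    for _ in range(200):
        pc, pm = (0.6, 0.20) if pop.std() < 0.5 else (0.9, 0.05)  # adaptive rule
        parents = pop[np.argsort(fitness(pop))[-20:]]             # selection
        kids = parents[rng.permutation(20)].copy()
        mix = rng.uniform(size=kids.shape) < pc                   # crossover
        kids[mix] = (kids + parents[rng.permutation(20)])[mix] / 2
        kids += (rng.uniform(size=kids.shape) < pm) * rng.normal(0, 0.5, kids.shape)
        pop = np.vstack([parents, kids])
    print("best individual:", pop[np.argmax(fitness(pop))])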

18.
In applications of statistical methods to medical diagnosis, information on patients' diseases and symptoms is collected and the resulting database is used to diagnose new patients. The data structure is complicated by a number of factors, two of which are examined here: selection bias and unstable population. Under reasonable conditions, no correction for selection bias is required when assessing probabilities for diseases based on symptom information, and it is suggested that these "diagnostic distributions" should form the principal object of study. Transformation of these distributions under changing population structure is considered and shown to take on a simple form in many situations. It is argued that the prevailing paradigm of diagnostic statistics, which concentrates on the incidence of symptoms for a given disease, is largely inappropriate and should be replaced by an emphasis on diagnostic distributions. The generalized logistic model is seen to fit naturally into the new framework.
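A minimal sketch of modeling a diagnostic distribution P(disease | symptoms) directly with a logistic model, in the spirit of the generalized logistic framework; the symptom data and weights are simulated placeholders.

    # Hedged sketch: fit P(disease | symptoms) and score a new symptom profile.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(8)
    symptoms = rng.integers(0, 2, (600, 5))  # binary symptom indicators
    score = symptoms @ np.array([2, -1, 1, 0, 1]) + rng.normal(0, 1, 600)
    disease = (score > 1).astype(int)

    clf = LogisticRegression().fit(symptoms, disease)
    print("P(disease | new profile):", clf.predict_proba([[1, 0, 1, 0, 1]])[0, 1])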

19.
This paper examines modeling and inference questions for experiments in which different subsets of a set of k possibly dependent components are tested in r different environments. In each environment, the failure times of the set of components on test are assumed to be governed by a particular type of multivariate exponential (MVE) distribution. For any given component tested in several environments, it is assumed that its marginal failure rate varies from one environment to another via a change of scale between the environments, resulting in a joint MVE model which links in a natural way the applicable MVE distributions describing component behavior in each fixed environment. This study thus extends the work of Proschan and Sullo (1976) to multiple environments and the work of Kvam and Samaniego (1993) to dependent data. The problem of estimating model parameters via the method of maximum likelihood is examined in detail. First, necessary and sufficient conditions for the identifiability of model parameters are established. We then treat the derivation of the MLE via a numerically augmented application of the EM algorithm. The feasibility of the estimation method is demonstrated in an example in which the likelihood ratio test of the hypothesis of equal component failure rates within any given environment is carried out.

20.
An artificial neural model is used to estimate the natural sediment discharge in rivers in terms of sediment concentration. This is achieved by training the network on data from several natural streams collected from reliable sources. The selection of the water and sediment variables used in the model is based on prior knowledge from conventional analyses grounded in the dynamic laws of flow and sediment. Choosing an appropriate neural network structure and providing field data to that network for training are addressed by using a constructive back-propagation algorithm. The model parameters, as well as the fluvial variables, are extensively investigated in order to obtain the most accurate results. In verification, the estimated sediment concentration values agree well with the measured ones. The model is evaluated by applying it to other groups of data from different rivers. In general, the new approach gives better results compared with several commonly used sediment discharge formulas.
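A minimal sketch of a back-propagation network mapping flow and sediment variables to sediment concentration; scikit-learn's MLPRegressor replaces the paper's constructive algorithm, and the input variables and synthetic rating relation are hypothetical.

    # Hedged sketch: train on some rivers, verify on held-out rivers.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(13)
    X = np.column_stack([
        rng.uniform(0.3, 3.0, 400),    # velocity, m/s (simulated)
        rng.uniform(0.5, 8.0, 400),    # depth, m
        rng.uniform(1e-4, 1e-2, 400),  # slope
        rng.uniform(0.1, 2.0, 400),    # median grain size, mm
    ])
    conc = 50 * X[:, 0] ** 1.5 * X[:, 2] ** 0.4 / X[:, 3] + rng.normal(0, 2, 400)

    net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3_000,
                       random_state=0).fit(X[:300], conc[:300])
    print("R^2 on held-out data:", net.score(X[300:], conc[300:]))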
